Efeta7451/void-memory
Enable AI agents to retain context and knowledge across sessions by filtering noise and preserving memory through auto-compacts and restarts.
Platform-specific configuration (add to `.claude/settings.json` under the `mcpServers` key):

```json
{
  "mcpServers": {
    "void-memory": {
      "command": "npx",
      "args": [
        "-y",
        "void-memory"
      ]
    }
  }
}
```
Your AI agent forgets everything on auto-compact. This fixes that.
Every Claude Code session, every long conversation, every context window reset — your agent starts from zero. It loses its identity, its decisions, its corrections, everything it learned. You brief it again. It forgets again.
Void Memory gives AI agents persistent memory that survives auto-compacts, restarts, and session boundaries. One `void_recall("who am I, what am I working on")` and the agent is back, with identity, context, and accumulated knowledge, in under 10ms.
But it's not just persistence. Most memory systems dump everything into context and hope for the best. Void Memory actively carves out 30% structural absence — filtering noise before it reaches the agent, so recall is clean, relevant, and fits within the context budget. Inspired by Ternary Photonic Neural Network research where a 30% void fraction emerged as a topological invariant.
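The void-carving idea can be sketched in a few lines. This is a hypothetical illustration, assuming a simple relevance-score model; the function and field names are illustrative, not Void Memory's actual API:

```python
def carve_void(blocks, void_fraction=0.3):
    """Hypothetical filter: rank memory blocks by relevance and
    deliberately drop the bottom `void_fraction` before recall,
    so only the cleanest ~70% ever reaches the agent's context."""
    ranked = sorted(blocks, key=lambda b: b["score"], reverse=True)
    keep = max(1, int(len(ranked) * (1 - void_fraction)))
    return ranked[:keep]

# Illustrative memory blocks; scores stand in for a real relevance model.
memories = [
    {"text": "identity: build agents for this team", "score": 0.92},
    {"text": "decision: ship as an npx MCP server", "score": 0.81},
    {"text": "fix: last run failed on a stale cache", "score": 0.55},
    {"text": "shell noise from an old session", "score": 0.30},
    {"text": "duplicate of a compacted transcript", "score": 0.12},
]

recalled = carve_void(memories)  # keeps the top 3 of 5 blocks
```

The point of the sketch is that the absence is structural, not incidental: the filter always discards a fixed fraction, regardless of how much context budget remains.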
We built this because we needed it. Six AI agents run on our team 24/7. They auto-compact constantly. Without Void Memory, they'd be goldfish. With it, they remember who they are, what they've built, and what went wrong last time.
Tested on a 992-block corpus with 8 diverse queries:
```mermaid
xychart-beta
    title "Relevance Comparison (%)"
    x-axis ["Void Memory", "Simple RAG", "Naive Stuffing"]
    y-axis "Relevance %" 0 --> 100
    bar [84.2, 10.5, 23.7]
```

| Method | Relevance | Latency | Tokens Used | Noise Hits | Efficiency |
|--------|-----------|---------|-------------|------------|------------|
| Void Memory | 84.2% | 62ms | 292 | 0 | 2.88/1K |
| Simple RAG | 10.5% | 22ms | 226 | 0 | 0.47/1K |
| Naive Stuffing | 23.7% | 0ms | 44,621 | 60% | 0.01/1K |
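The Efficiency column is not defined in the table; it appears to be relevance fraction per 1,000 tokens consumed (an assumption). A quick check under that definition:

```python
def efficiency_per_1k(relevance_pct, tokens):
    # Assumed definition: relevance fraction gained per 1,000 tokens spent.
    return (relevance_pct / 100) / tokens * 1000

void_memory = round(efficiency_per_1k(84.2, 292), 2)      # Void Memory row
naive_stuffing = round(efficiency_per_1k(23.7, 44621), 2) # Naive Stuffing row
```

The Void Memory row reproduces 2.88 and Naive Stuffing reproduces 0.01; the Simple RAG row works out to 0.46 under this formula, close to the table's 0.47, so the published figures were likely computed from unrounded relevance values.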
8x more relevant than RAG. 288x more token-efficient than context stuffing. Zero noise.
Void Memory achieved 100% relevance on 4 of 8 queries. RAG returned **0% relevance**.