loaditout.ai

agent-memory

MCP Tool

Keshab0310/agent-memory

Save 60-90% on LLM token costs with intelligent memory compression for multi-agent systems

Install

$ npx loaditout add Keshab0310/agent-memory

Platform-specific configuration:

.claude/settings.json
{
  "mcpServers": {
    "agent-memory": {
      "command": "npx",
      "args": [
        "-y",
        "agent-memory"
      ]
    }
  }
}

Add the config above to .claude/settings.json under the mcpServers key.

About

agent-memory

Save 60-90% on LLM token costs with intelligent memory compression for multi-agent systems.

agent-memory compresses raw LLM tool output into structured observations, shares context across agents via a memory bus, and injects only relevant memory into each prompt — keeping your token budget under control.

---

The Problem

Running 5+ concurrent LLM agents burns tokens fast:

  • Each agent re-reads the same files, re-discovers the same context
  • Raw tool output (file reads, command results) consumes thousands of tokens
  • No shared memory means redundant API calls across agents
  • You hit rate limits and token budgets within minutes

The Solution

agent-memory sits between your agents and their context window:

Raw Tool Output (5,000 tokens)
  -> Observation Compression (500 tokens)
    -> Shared Memory Bus (SQLite + FTS5)
      -> Budget-Controlled Context Injection (8,000 token cap)

Tested results: 66-94% token savings, 3-74x compression ratio.
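The budget-controlled injection step above can be sketched in a few lines. This is an illustrative approximation, not the library's actual API: `estimate_tokens` and `build_context` are hypothetical names, and the 4-characters-per-token heuristic is an assumption.

```python
# Illustrative sketch of budget-controlled context injection (not agent-memory's API).

def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text (assumption).
    return max(1, len(text) // 4)

def build_context(observations: list[str], budget: int = 8000) -> str:
    """Greedily pack observations (assumed pre-sorted by relevance) under a token cap."""
    selected, used = [], 0
    for obs in observations:
        cost = estimate_tokens(obs)
        if used + cost > budget:
            break  # hard stop: never exceed the injection budget
        selected.append(obs)
        used += cost
    return "\n".join(selected)
```

With an 8,000-token cap, a scheme like this keeps the most relevant compressed observations and drops the rest, which is what bounds per-prompt cost no matter how much memory the bus accumulates.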

---

Quick Start

As a Python SDK

$ pip install agent-memory

from agent_memory import MemoryStore, ContextBuilder, Observation

# Initialize
memory = MemoryStore("./my_project.db")

# Store a compressed observation
memory.store_observation(Observation(
    agent_id="researcher-1",
    project="my-app",
    title="Found pagination bug in /users endpoint",
    narrative="The API returns 500 when page > 100 due to missing LIMIT clause",
    facts=["Max page size is 100", "No server-side validation"],
    concepts=["api", "bug", "pagination"],
))

# Build context for another agent (token-budgeted)
builder = ContextBuilder(memory)
context = builder.build(
    project="my-app",
    agent_id="coder-1",
    task_description="Fix the pagination bug",
)
# -> Returns compressed context within 8000 token budget
# -> Includes researcher-1's findings automatically
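What the downstream agent does with that context is plain prompt assembly. A minimal sketch, assuming the returned context is an ordinary string; the variable values here are illustrative, not real library output:

```python
# Hypothetical downstream use: prepend the shared context to coder-1's prompt.
# `context` stands in for the string returned by ContextBuilder.build above.
context = "Found pagination bug in /users endpoint: missing LIMIT clause"
task = "Fix the pagination bug"

prompt = (
    "Relevant memory from other agents:\n"
    f"{context}\n\n"
    f"Task: {task}"
)
```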
As a Claude Code Plugin

Step 1: Install

# Add the marketplace
$ claude plugin marketplace add Keshab03

Tags

ai-agents, anthropic, claude, llm, mcp, memory, multi-agent, ollama, prompt-caching, token-optimization


Quality Signals

Installs: 0
Last updated: 12 days ago
Security: A
README: New

Safety

Risk Level: medium
Data Access: read
Network Access: none

Details

Source: github-crawl
Last commit: 4/2/2026
View on GitHub →

Embed Badge

[![Loaditout](https://loaditout.ai/api/badge/Keshab0310/agent-memory)](https://loaditout.ai/skills/Keshab0310/agent-memory)