# vcanchik/robotmem

Robot memory
Platform-specific configuration:

```json
{
  "mcpServers": {
    "robotmem": {
      "command": "npx",
      "args": [
        "-y",
        "robotmem"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
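If `.claude/settings.json` already exists, the new server entry should be merged into it rather than overwriting the file. A minimal sketch using only the standard library (the helper name `add_robotmem_server` is illustrative, not part of robotmem):

```python
import json
from pathlib import Path

def add_robotmem_server(settings_path: str) -> dict:
    """Merge the robotmem MCP server entry into a settings file, preserving other keys."""
    path = Path(settings_path)
    settings = json.loads(path.read_text()) if path.exists() else {}
    # setdefault keeps any servers already configured under mcpServers
    servers = settings.setdefault("mcpServers", {})
    servers["robotmem"] = {"command": "npx", "args": ["-y", "robotmem"]}
    path.write_text(json.dumps(settings, indent=2))
    return settings
```

Running it once adds (or updates) the `robotmem` entry while leaving any other configured servers in place.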
> Your robot ran 1000 experiments, starting from scratch every time. robotmem stores episode experiences — parameters, trajectories, outcomes — and retrieves the most relevant ones to guide future decisions.
FetchPush experiment: +25 percentage points in success rate (42% → 67%), CPU-only, reproducible in 5 minutes.
```bash
pip install robotmem
```

Or via npm:

```bash
npm install -g @muscular/robotmem
robotmem
```

The npm package is a thin wrapper around the Python CLI. It requires Python 3.10+ and installs robotmem on first run if it is not already available in the selected interpreter.
```python
from robotmem import learn, recall, save_perception, start_session, end_session

# Start an episode
session = start_session(context='{"robot_id": "arm-01", "task": "push"}')

# Record experience
learn(
    insight="grip_force=12.5N yields highest grasp success rate",
    context='{"params": {"grip_force": {"value": 12.5, "unit": "N"}}, "task": {"success": true}}'
)

# Retrieve experiences (structured filtering + spatial nearest-neighbor)
memories = recall(
    query="grip force parameters",
    context_filter='{"task.success": true}',
    spatial_sort='{"field": "spatial.position", "target": [1.3, 0.7, 0.42]}'
)

# Store perception data
save_perception(
    description="Grasp trajectory: 30 steps, success",
    perception_type="procedural",
    data='{"sampled_actions": [[0.1, -0.3, 0.05, 0.8], ...]}'
)

# End episode (auto-consolidation + proactive recall)
end_session(session_id=session["session_id"])
```

| API | Purpose |
|-----|---------|
| `learn` | Record physical experiences (parameters / strategies / lessons) |
| `recall` | Retrieve experiences — BM25 + vector hybrid search with `context_filter` and `spatial_sort` |
| `save_perception` | Store perception / trajectory / force data (visu |
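robotmem's internals aren't shown here, but the `spatial_sort` option to `recall` can be pictured as a nearest-neighbor sort over stored memories. A minimal, self-contained sketch (the memory dict shape and the standalone `spatial_sort` function below are illustrative assumptions, not the library's actual code):

```python
import math

def spatial_sort(memories: list[dict], field: str, target: list[float]) -> list[dict]:
    """Order memories by Euclidean distance from `field` to `target`, nearest first."""
    def resolve(mem: dict, dotted: str):
        # Walk a dotted path like "spatial.position" through a nested dict.
        value = mem
        for key in dotted.split("."):
            value = value[key]
        return value

    return sorted(memories, key=lambda m: math.dist(resolve(m, field), target))

memories = [
    {"insight": "far grasp",  "spatial": {"position": [2.0, 2.0, 1.0]}},
    {"insight": "near grasp", "spatial": {"position": [1.3, 0.7, 0.4]}},
]
ranked = spatial_sort(memories, "spatial.position", [1.3, 0.7, 0.42])
# ranked[0] is the memory recorded closest to the query point
```

In the real API this sorting runs after the hybrid BM25 + vector retrieval and any `context_filter`, so the nearest-neighbor ordering applies only to memories that already match the query.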