# MauricioPerera/ctt-shell
Universal AI agent framework that makes 3B parameter models compose and execute multi-step plans like 12B models. 6 domains, 8 MCP tools, 167 tests, zero runtime dependencies.
The thesis: structured context at inference time substitutes for parameter count — *Context-Time Training*.
[Node.js](https://nodejs.org/) · [TypeScript](https://www.typescriptlang.org/) · [MIT License](https://opensource.org/licenses/MIT)
```
"Create a post, assign a         ┌─────────────┐
category, and upload media" ───→ │ CTT Memory  │ ← Knowledge, Skills, Memories
                                 └──────┬──────┘
                                        ▼
                                 ┌─────────────┐
                                 │  LLM (3B+)  │ ← Few-shot context + anti-patterns
                                 └──────┬──────┘
                                        ▼
                                 ┌─────────────┐
                                 │ Guard Rails │ ← Normalize JSON, fix deps, retry
                                 └──────┬──────┘
                                        ▼
                                 ┌─────────────┐
                                 │   Execute   │ ← Domain adapter (WordPress, n8n, ...)
                                 └──────┬──────┘
                                        ▼
                                 ┌─────────────┐
                                 │    Learn    │ ← Save as Skill or Memory
                                 └─────────────┘
```

An agent receives a natural-language goal, searches its CTT memory for relevant operations and patterns, asks any LLM to generate a plan, normalizes the output through battle-tested guard rails, executes it via a domain adapter, and learns from the result by saving it as a Skill or Memory.
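The loop above can be sketched in TypeScript. This is an illustrative outline only, not the framework's actual API: every name here (`searchMemory`, `llmGeneratePlan`, `normalizePlan`, `DomainAdapter`, the stub WordPress adapter) is hypothetical, and the memory, LLM, and execution stages are stubbed out.

```typescript
// Hypothetical sketch of the agent pipeline; all names are illustrative.

interface PlanStep {
  tool: string;
  args: Record<string, unknown>;
}

interface DomainAdapter {
  execute(step: PlanStep): string;
}

// 1. CTT memory: retrieve few-shot examples and anti-patterns for the goal (stubbed).
function searchMemory(goal: string): string[] {
  return [`example plan for: ${goal}`];
}

// 2. Any LLM generates a raw plan; small models often emit slightly
//    malformed JSON, simulated here with a trailing comma.
function llmGeneratePlan(goal: string, context: string[]): string {
  return `[{"tool": "create_post", "args": {"title": "${goal}"}},]`;
}

// 3. Guard rails: normalize sloppy output before execution.
function normalizePlan(raw: string): PlanStep[] {
  const cleaned = raw.replace(/,\s*([\]}])/g, "$1"); // strip trailing commas
  return JSON.parse(cleaned) as PlanStep[];
}

// 4. A stub domain adapter standing in for WordPress, n8n, etc.
const wordpress: DomainAdapter = {
  execute: (step) => `executed ${step.tool}`,
};

function runAgent(goal: string, adapter: DomainAdapter): string[] {
  const context = searchMemory(goal);                   // CTT Memory
  const raw = llmGeneratePlan(goal, context);           // LLM (3B+)
  const plan = normalizePlan(raw);                      // Guard Rails
  const results = plan.map((s) => adapter.execute(s));  // Execute
  // Learn: a real implementation would save the successful plan
  // back into memory as a Skill or Memory.
  return results;
}

console.log(runAgent("Create a post", wordpress));
```

The key design point the diagram makes is that the guard-rail stage sits between the model and the adapter, so a small model's imperfect JSON never reaches execution unrepaired.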