# anshmajumdar121/context-optimizer
Reduce Claude AI token consumption by 5x-27x using prompt-native workflows and structural code manifests
Platform-specific configuration:

```json
{
  "mcpServers": {
    "context-optimizer": {
      "command": "npx",
      "args": [
        "-y",
        "context-optimizer"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
> Reduce token consumption by 5x-27x in Claude Desktop/Web using prompt-native workflows + a lightweight local manifest generator.
[MIT License](https://opensource.org/licenses/MIT) · [Python](https://www.python.org/downloads/) · [Claude](https://claude.ai) · [Contributing](CONTRIBUTING.md)
[Live Preview](https://anshmajumdar121.github.io/context-optimizer/) – See the interactive documentation
---
Claude has a 200K-token context window, but burning 20K tokens just to show a directory structure is wasteful. This toolkit teaches Claude to fetch only what it needs, compress what it sees, and reason structurally instead of reading raw files.
No API hacks. No leaked code. No reverse engineering. Just official Claude features (Custom Instructions + Projects + Knowledge) and a lightweight local indexer.
Most AI tools waste tokens by reading your entire project. Context Optimizer uses a structural graph to fetch only what matters.
*Comparison: Without Graph (13,205 tokens) vs With Graph (1,928 tokens)*
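To make the structural-graph idea concrete, here is a minimal sketch of what a manifest generator could look like. It is a hypothetical illustration, not the repo's actual implementation: it walks a project and records only module-level signatures (`def` and `class` names), so Claude can reason about structure without ever seeing file bodies.

```python
import ast
import os

def build_manifest(root: str) -> dict:
    """Walk a project and record only structural facts
    (module -> function/class signatures), never file bodies.

    Hypothetical sketch of a structural manifest; the real
    indexer may capture more (imports, call edges, docstrings).
    """
    manifest = {}
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith(".py"):
                continue
            path = os.path.join(dirpath, name)
            with open(path, encoding="utf-8") as f:
                try:
                    tree = ast.parse(f.read())
                except SyntaxError:
                    continue  # skip files that don't parse
            symbols = []
            for node in tree.body:
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
                    args = [a.arg for a in node.args.args]
                    symbols.append(f"def {node.name}({', '.join(args)})")
                elif isinstance(node, ast.ClassDef):
                    symbols.append(f"class {node.name}")
            if symbols:
                manifest[os.path.relpath(path, root)] = symbols
    return manifest
```

A manifest like `{"api/routes.py": ["def get_user(user_id)", "class Router"]}` costs a few dozen tokens, while the files it summarizes can cost thousands; Claude then asks for a specific file only when the manifest shows it is relevant.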
---
| Scenario | Before (tokens) | After (tokens) | Reduction |
|----------|-----------------|----------------|-----------|
| Code review (3 files) | ~18,000 | ~1,200 | 15x |
| Debug a function | ~8,000 | ~400 | 20x |
| Plan a feature (5+ files) | ~35,000 | ~1,800 | 19x |
| Full monorepo analysis | ~80,000 | ~3,500 | 22x |
*Measured on Claude 3.5 Sonnet with typical prompts*
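The reduction factors above can be reproduced roughly with a character-count heuristic. The helper below is an illustrative sketch (not part of the toolkit) that uses the common ~4-characters-per-token rule of thumb; real tokenizer counts will differ somewhat.

```python
def reduction_factor(raw_text: str, manifest_text: str) -> float:
    """Compare the token cost of pasting raw files versus a compact
    manifest, using a rough ~4 chars/token heuristic (actual
    tokenizers vary by model and content)."""
    def estimate_tokens(text: str) -> int:
        return max(1, len(text) // 4)
    return estimate_tokens(raw_text) / estimate_tokens(manifest_text)
```

For example, ~72 KB of raw source against a ~3.6 KB manifest gives roughly the 20x reduction reported for the debugging scenario.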
---
```bash
# 1. Clone the repo
git clone https://github.com/yourusername/context-optimizer.git
```