rtk-ai/rtk
Platform-specific configuration:

```json
{
  "mcpServers": {
    "rtk": {
      "command": "npx",
      "args": ["-y", "rtk"]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
<p align="center"> <strong>High-performance CLI proxy that reduces LLM token consumption by 60-90%</strong> </p>
<p align="center"> <a href="https://www.rtk-ai.app">Website</a> • <a href="#installation">Install</a> • <a href="docs/TROUBLESHOOTING.md">Troubleshooting</a> • <a href="ARCHITECTURE.md">Architecture</a> • <a href="https://discord.gg/pvHdzAec">Discord</a> </p>
<p align="center"> <a href="README.md">English</a> • <a href="README_fr.md">Français</a> • <a href="README_zh.md">中文</a> • <a href="README_ja.md">日本語</a> • <a href="README_ko.md">한국어</a> • <a href="README_es.md">Español</a> </p>
---
rtk filters and compresses command outputs before they reach your LLM context. Single Rust binary, zero dependencies, <10ms overhead.
| Operation | Frequency | Standard (tokens) | rtk (tokens) | Savings |
|-----------|-----------|-------------------|--------------|---------|
| ls / tree | 10x | 2,000 | 400 | -80% |
| cat / read | 20x | 40,000 | 12,000 | -70% |
| grep / rg | 8x | 16,000 | 3,200 | -80% |
| git status | 10x | 3,000 | 600 | -80% |
| git diff | | | | |
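To see the cumulative effect, summing the four complete rows of the table gives the overall session saving (arithmetic on the table's own numbers, assuming each token column is a per-session total):

```python
# Totals over the four complete rows of the comparison table above:
# (standard tokens, rtk tokens) per operation.
rows = {
    "ls / tree":  (2_000, 400),
    "cat / read": (40_000, 12_000),
    "grep / rg":  (16_000, 3_200),
    "git status": (3_000, 600),
}

standard = sum(s for s, _ in rows.values())  # 61,000 tokens without rtk
with_rtk = sum(r for _, r in rows.values())  # 16,200 tokens with rtk
savings = 1 - with_rtk / standard            # ≈ 0.734, i.e. ~73% overall
print(f"{standard} -> {with_rtk} tokens ({savings:.0%} saved)")
```

The combined figure lands inside the 60-90% range claimed above, since the heaviest operation (`cat / read`) saves the least per call.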