ypollak2/llm-router
Smart LLM router for Claude Code — auto-picks cheapest model per task, routes within Claude subscription first. 70-85% cost savings.
Platform-specific configuration:

```json
{
  "mcpServers": {
    "llm-router": {
      "command": "npx",
      "args": ["-y", "llm-router"]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
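As an alternative to editing `settings.json` by hand, Claude Code's CLI can register MCP servers directly. A sketch, assuming a recent Claude Code release that ships the `claude mcp add` subcommand (check `claude mcp --help` for your version):

```shell
# Register the router as an MCP server (everything after -- is the launch command).
claude mcp add llm-router -- npx -y llm-router

# Confirm it was registered.
claude mcp list
```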
<h1 align="center">LLM Router</h1>
<p align="center"> <strong>One MCP server. Every AI model. Smart routing.</strong> </p>
<p align="center"> Route text, image, video, and audio tasks to 20+ AI providers — automatically picking the best model for the job based on your budget and active profile. </p>
<p align="center"> <a href="#quick-start">Quick Start</a> • <a href="#how-it-works">How It Works</a> • <a href="#providers">Providers</a> • <a href="#mcp-tools">Tools</a> • <a href="#configuration">Configuration</a> • <a href="docs/PROVIDERS.md">Provider Setup</a> </p>
<p align="center"> <a href="https://github.com/ypollak2/llm-router/actions">CI</a> • <a href="https://github.com/ypollak2/llm-router/blob/main/LICENSE">License</a> • <a href="https://pypi.org/project/claude-code-llm-router/">PyPI</a> </p>
---
You use Claude Code. You also have GPT-4o, Gemini, Perplexity, DALL-E, Runway, ElevenLabs — but switching between them is manual, slow, and expensive.
LLM Router gives your AI assistant one unified interface to all of them — and automatically picks the right one based on what you're doing and what you can afford.
You: "Research the latest AI funding rounds"
Router: → Perplexity Sonar Pro (search-augmented, best…
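The pick-the-cheapest-capable-model idea behind that routing decision can be sketched as follows. This is an illustrative sketch only: the model names, prices, and selection rule are assumptions for the example, not the router's actual catalog or algorithm.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    capabilities: set      # e.g. {"text", "search", "code"}
    cost_per_mtok: float   # hypothetical input price, USD per million tokens

# Hypothetical catalog — real entries would come from provider configs.
CATALOG = [
    Model("claude-sonnet", {"text", "code"}, 3.00),
    Model("perplexity-sonar-pro", {"text", "search"}, 3.00),
    Model("gpt-4o-mini", {"text"}, 0.15),
]

def route(task_capability: str, budget_per_mtok: float) -> Model:
    """Pick the cheapest model that supports the task and fits the budget."""
    candidates = [
        m for m in CATALOG
        if task_capability in m.capabilities
        and m.cost_per_mtok <= budget_per_mtok
    ]
    if not candidates:
        raise ValueError(f"no model supports {task_capability!r} within budget")
    return min(candidates, key=lambda m: m.cost_per_mtok)

print(route("search", 5.0).name)  # → perplexity-sonar-pro (only search-capable model)
print(route("text", 1.0).name)    # → gpt-4o-mini (budget rules out the $3 models)
```

A real router would also weigh quality scores and the active profile, but the core mechanism is the same: filter by capability and budget, then minimize cost.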