# barrymister/ai-model-selector-mcp

MCP server for AI model metadata — query 76+ models by capability, compatibility, and task fit
Platform-specific configuration:

```json
{
  "mcpServers": {
    "ai-model-selector-mcp": {
      "command": "npx",
      "args": ["-y", "ai-model-selector-mcp"]
    }
  }
}
```

Add the config above to .claude/settings.json under the mcpServers key.
MCP server that gives AI assistants structured access to model metadata for 76+ AI models across Ollama, Claude, and OpenRouter.
Query capabilities, check compatibility, compare models, and get task-based recommendations — all via the Model Context Protocol.
---
Add to your project's .mcp.json:

```json
{
  "mcpServers": {
    "ai-model-selector": {
      "command": "npx",
      "args": ["-y", "ai-model-selector-mcp@latest"]
    }
  }
}
```

Restart Claude Code. The tools are now available.
Any MCP-compatible client can connect via stdio:

```shell
npx ai-model-selector-mcp
```

---
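For example, a client opens the session with the standard MCP initialize handshake, written as newline-delimited JSON-RPC to the server's stdin (message shape per the MCP specification; the client name, version, and protocol version below are illustrative):

```json
{"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {"protocolVersion": "2024-11-05", "capabilities": {}, "clientInfo": {"name": "example-client", "version": "1.0.0"}}}
```

After the server's response, the client sends the `notifications/initialized` notification; tools can then be discovered with `tools/list` and invoked with `tools/call`.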
```
Claude Code (or any MCP client)
        │
        │ JSON-RPC over stdio
        ▼
ai-model-selector-mcp
        │
        │ imports catalog data
        ▼
ai-model-selector/catalog
(76+ model entries with capabilities,
 parameter sizes, exclusion rules)
```

The MCP server wraps the ai-model-selector catalog — a curated dataset of AI model metadata. No external API calls, no database, no network access. All data is bundled.
---
get_model_metadata
Look up a single model's capabilities, parameter size, and exclusion rules.

Input: { modelId: "gemma3:12b" }
Output: { capabilities: ["general", "writing"], description: "Google all-rounder", parameterSize: "12B" }

filter_models
Filter the catalog by capability tags and/or mode compatibility.

Input: { capabilities: ["coding"], excludeMode: "json-output" }
Output: { models: [...], count: 5 }

check_compatibility
Pre-flight check: is this model compatible with a given mode?

Input: { modelId: "phi4-reasoning", mode: "json-output" }
Output: { compatible: false, reason: "Model excluded from json-output mode...", model: {...} }

compare_models
Side-by-side comparison of 2+ models — shared and unique capabilities.
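The semantics behind filter_models and check_compatibility can be sketched in TypeScript. The catalog entries and field names below (e.g. `excludedModes`) are hypothetical stand-ins for the bundled catalog, not its actual schema:

```typescript
// Minimal sketch of the filter/exclusion semantics, assuming a catalog
// entry carries capability tags plus a list of modes it is excluded from.
interface ModelEntry {
  id: string;
  capabilities: string[];
  parameterSize: string;
  excludedModes: string[]; // modes this model is incompatible with
}

// Toy catalog for illustration only; the real data ships with the package.
const catalog: ModelEntry[] = [
  { id: "gemma3:12b", capabilities: ["general", "writing"], parameterSize: "12B", excludedModes: [] },
  { id: "phi4-reasoning", capabilities: ["coding", "reasoning"], parameterSize: "14B", excludedModes: ["json-output"] },
];

// Keep models that have every requested capability and, if excludeMode is
// given, are not excluded from that mode.
function filterModels(capabilities: string[], excludeMode?: string): ModelEntry[] {
  return catalog.filter((m) =>
    capabilities.every((c) => m.capabilities.includes(c)) &&
    (excludeMode === undefined || !m.excludedModes.includes(excludeMode))
  );
}

// Pre-flight check mirroring check_compatibility's { compatible, reason } shape.
function checkCompatibility(modelId: string, mode: string) {
  const model = catalog.find((m) => m.id === modelId);
  if (!model) return { compatible: false, reason: `Unknown model: ${modelId}` };
  if (model.excludedModes.includes(mode)) {
    return { compatible: false, reason: `Model excluded from ${mode} mode`, model };
  }
  return { compatible: true, model };
}
```

With the toy catalog above, filtering for coding models that support json-output excludes phi4-reasoning, matching the check_compatibility example.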