# daedalus/mcp-llm-gateway
MCP-compatible LLM gateway that proxies completion requests to downstream OpenAI-compatible providers.
## Platform-specific configuration

```json
{
  "mcpServers": {
    "mcp-llm-gateway": {
      "command": "npx",
      "args": [
        "-y",
        "mcp-llm-gateway"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
[PyPI](https://pypi.org/project/mcp-llm-gateway/) [Ruff](https://github.com/astral-sh/ruff)
mcp-name: io.github.daedalus/mcp-llm-gateway
## Installation

```shell
pip install mcp-llm-gateway
```

## Configuration

Set the following environment variables:

- `DOWNSTREAM_URL`: Base URL for the OpenAI-compatible downstream API (required)
- `DEFAULT_MODEL`: Default model to use for completions (required)
- `MODEL_LIST_URL`: URL to fetch available models from (optional, defaults to models.dev)
- `API_KEY`: Optional API key for the downstream provider (passthrough)
- `TIMEOUT`: Request timeout in seconds (optional, default: 60)

## Usage

Run the MCP server with stdio transport:

```shell
mcp-llm-gateway
```

## Tools

The server exposes the following tools:
- `list_models()`: List all available models from the remote endpoint
- `complete(prompt, model, max_tokens, temperature)`: Send a completion request to the downstream LLM provider

## Resources

- `models://list`: Returns the list of available models
- `config://info`: Returns current gateway configuration

## Development

```shell
git clone https://github.com/daedalus/mcp-llm-gateway.git
cd mcp-llm-gateway
pip install -e ".[test]"

# run tests
pytest

# format
ruff format src/ tests/

# lint
ruff check src/ tests/

# type check
mypy src/
```

## Architecture

- `Model`: Dataclass representing an available LLM model
- `CompletionRequest`: Dataclass for completion request payloads
- `GatewayConfig`: Dataclass for gateway configuration
- `HTTPAdapter`: HTTP client for the downstream API
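As a rough illustration, the dataclasses above might look like the following sketch. The field names and defaults here are assumptions inferred from the documented environment variables and the `complete(...)` tool signature, not the actual source; the `from_env` helper is likewise hypothetical.

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class Model:
    # Minimal shape for an entry returned by list_models(); fields are assumed.
    id: str
    provider: Optional[str] = None


@dataclass
class CompletionRequest:
    # Mirrors the complete(prompt, model, max_tokens, temperature) tool signature.
    prompt: str
    model: str
    max_tokens: Optional[int] = None
    temperature: Optional[float] = None


@dataclass
class GatewayConfig:
    # Mirrors the documented environment variables.
    downstream_url: str
    default_model: str
    model_list_url: Optional[str] = None
    api_key: Optional[str] = None
    timeout: float = 60.0

    @classmethod
    def from_env(cls) -> "GatewayConfig":
        # DOWNSTREAM_URL and DEFAULT_MODEL are required; the rest are optional,
        # with TIMEOUT falling back to the documented default of 60 seconds.
        return cls(
            downstream_url=os.environ["DOWNSTREAM_URL"],
            default_model=os.environ["DEFAULT_MODEL"],
            model_list_url=os.environ.get("MODEL_LIST_URL"),
            api_key=os.environ.get("API_KEY"),
            timeout=float(os.environ.get("TIMEOUT", "60")),
        )
```

A config built this way would then be handed to the HTTP adapter that talks to the downstream API.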