sumisingh10/qgrep-mcp
Indexed code search MCP server. Orders of magnitude faster than ripgrep on large codebases. Built for Claude Code, works with Codex CLI, Cursor, Copilot, and any MCP client. Search, indexing, and estimation also available as a REST API.
Platform-specific configuration:
```json
{
  "mcpServers": {
    "qgrep-mcp": {
      "command": "npx",
      "args": ["-y", "qgrep-mcp"]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
An amortized cost estimator decides at query time whether building a qgrep index is worth it, based on file count, which correlates strongly (r = 0.96) with ripgrep latency. The server works fully without qgrep installed: it is a pure enhancement over ripgrep.
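The break-even decision can be sketched as follows. This is an illustrative model only: the function names, the linear latency fit, and the constants are assumptions, not the package's actual implementation.

```typescript
// Hypothetical sketch of an amortized cost estimator. The README reports
// that file count correlates strongly with ripgrep latency, so we model
// ripgrep cost as linear in file count.

// Roughly ~1 s per ~1,000 files, loosely matching the benchmark table
// (illustrative constant, not measured by this sketch).
function estimateRipgrepSeconds(fileCount: number): number {
  return fileCount / 1000;
}

// Amortize the one-time index build over the searches an agent session
// is expected to run, and compare against repeated linear scans.
function shouldBuildIndex(
  fileCount: number,
  expectedSearches: number,
  indexBuildSeconds: number,
  indexedSearchSeconds = 0.05, // ballpark per-search indexed latency
): boolean {
  const linearCost = expectedSearches * estimateRipgrepSeconds(fileCount);
  const indexedCost =
    indexBuildSeconds + expectedSearches * indexedSearchSeconds;
  return indexedCost < linearCost;
}
```

Under this model, a linux-sized repo (≈93k files) justifies even a one-minute index build after a handful of searches, while a 500-file repo never does.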
AI coding tools ship with ripgrep or similar linear-scan search. This works fine on small repos, but breaks down on large codebases:
| Repository | Files | ripgrep (per search) | qgrep (per search) |
|---------------------|--------|----------------------|--------------------|
| home-assistant/core | 24,718 | ~28s                 | ~0.034s            |
| rust-lang/rust      | 58,547 | ~60s                 | ~0.034s            |
| torvalds/linux      | 92,920 | ~92s                 | ~0.161s            |
Each search blocks the agent's reasoning until it returns. Even with async execution, ripgrep saturates disk I/O scanning the same files repeatedly. An indexed search returns in milliseconds regardless of repo size.
Why not just fix it upstream? The models behind these coding tools are post-trained to use specific built-in tools like Grep and file search. Tool preferences get baked into the model weights during post-training, and system prompts reinforce them further by defining the available tool set. Users can't modify either. We tested this directly with Claude Opus 4.6: even with the search_code MCP tool registered alongside built-in Grep, the model ignores it 100% of the time when no steering mechanism is present. This behavior may improve in future model generations, but as long as system prompts instruct the model to use a specific built-in search tool, the model will comply.
This project bridges that gap by working at the layer users can control: hooks intercept tool calls.
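As one concrete illustration, Claude Code's hooks system lets a `PreToolUse` hook match the built-in `Grep` tool, block the call (exit code 2), and feed a message back to the model. The matcher, message text, and redirect target below are a sketch, not this project's shipped hook configuration:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Grep",
        "hooks": [
          {
            "type": "command",
            "command": "echo 'Use the search_code MCP tool for this repo.' >&2; exit 2"
          }
        ]
      }
    ]
  }
}
```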