vola-trebla/toad-eye
OpenTelemetry observability for LLM systems — auto-instrumentation (OpenAI, Anthropic, Gemini, Vercel AI SDK), cost tracking, budget guards, agent tracing, OTel GenAI semconv, 8 Grafana dashboards. One line of code 🐸👁️
Platform-specific configuration:

```json
{
  "mcpServers": {
    "toad-eye": {
      "command": "npx",
      "args": ["-y", "toad-eye"]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
Observability for MCP servers and LLM applications.
One line of code. Full traces, metrics, and Grafana dashboards. Self-hosted. Privacy-first. No vendor lock-in.
[toad-eye on npm](https://www.npmjs.com/package/toad-eye)
---
Add observability to any MCP server in 2 lines:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { initObservability } from "toad-eye";
import { toadEyeMiddleware } from "toad-eye/mcp";

initObservability({ serviceName: "my-mcp-server" });

const server = new McpServer({ name: "my-server", version: "1.0.0" });
toadEyeMiddleware(server);

// Every tool call, resource read, and prompt is now traced.
// Spans appear in Jaeger. Metrics flow to Prometheus. Dashboards ready in Grafana.
```

Privacy by default — tool arguments and results are NOT recorded unless you opt in:
```typescript
toadEyeMiddleware(server, {
  recordInputs: true,
  redactKeys: ["apiKey", "token"],
});
```

Safe for stdio transport — OTel diagnostics are redirected to stderr.
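The behavior behind `redactKeys` is not spelled out above; the following is a minimal sketch of what key-based redaction typically looks like — an illustration only, not toad-eye's actual implementation (the function name `redact` and the `[REDACTED]` placeholder are assumptions):

```typescript
// Illustrative only: recursively replace values whose key is in the
// redaction set, leaving everything else untouched.
type Json = string | number | boolean | null | Json[] | { [k: string]: Json };

function redact(value: Json, keys: Set<string>): Json {
  if (Array.isArray(value)) return value.map((v) => redact(v, keys));
  if (value !== null && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) =>
        keys.has(k) ? [k, "[REDACTED]"] : [k, redact(v, keys)]
      )
    );
  }
  return value; // primitives pass through unchanged
}

const safe = redact(
  { user: "ada", apiKey: "sk-123", nested: { token: "t-9" } },
  new Set(["apiKey", "token"])
);
console.log(JSON.stringify(safe));
// → {"user":"ada","apiKey":"[REDACTED]","nested":{"token":"[REDACTED]"}}
```

Redacting by key name (rather than by value pattern) keeps the span payload's shape intact, so dashboards that group by attribute structure still work.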
Auto-instrument OpenAI, Anthropic, Gemini, and the Vercel AI SDK — zero wrappers:

```typescript
import { initObservability } from "toad-eye";

initObservability({
  serviceName: "my-app",
  instrument: ["openai", "anthropic"],
});

// Every SDK call is auto-traced — including streaming.
```

Install and start the local stack:

```shell
npm install toad-eye
npx toad-eye init   # scaffold observability configs
npx toad-eye up     # start Grafana + Prometheus
```
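If your application runs outside the scaffolded stack's network, you may need to point its exporter at the collector yourself. The variables below are the standard OpenTelemetry SDK environment variables (not toad-eye-specific), and `4318` is the OTLP/HTTP default port — the scaffolded configs may use different values:

```shell
# Standard OTel SDK configuration; adjust host/port to match the
# collector started by the stack if it differs from the OTLP default.
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
export OTEL_SERVICE_NAME="my-mcp-server"
```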