# agent-anatomy

Interactive visualization of LLM agent internals — the real API message flow (`tool_use`, `tool_result`, MCP, Skills). Agent Engineering = Context Engineering.
Platform-specific configuration: add the config below to `.claude/settings.json` under the `mcpServers` key.

```json
{
  "mcpServers": {
    "agent-anatomy": {
      "command": "npx",
      "args": ["-y", "agent-anatomy"]
    }
  }
}
```
Interactive visualization of how LLM agents actually work — the real API message flow between User, Agent, and LLM.
[Live demo](https://jegwan.github.io/agent-anatomy/)
> Agents aren't magic. They just assemble context.
>
> `system_prompt + tools[] + messages[] → LLM API → if tool_use: execute → append tool_result → repeat`
>
> That loop is the entire "intelligence" of an LLM agent.
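That loop fits in a few lines of code. Below is a minimal sketch, assuming an Anthropic-style response shape (content blocks with a `type` field); `call_llm` and `execute_tool` are stand-ins supplied by the caller, not a real SDK:

```python
def run_agent(user_input, tools, call_llm, execute_tool,
              system_prompt="You are an agent."):
    """The entire agent: assemble context, call the API, loop while the
    model asks for tools. `call_llm` and `execute_tool` are stand-ins."""
    messages = [{"role": "user", "content": user_input}]
    while True:
        # system + tools[] + messages[] -- that's all the LLM ever receives
        response = call_llm(system_prompt, tools, messages)
        messages.append({"role": "assistant", "content": response["content"]})
        tool_uses = [b for b in response["content"] if b["type"] == "tool_use"]
        if not tool_uses:                      # no tool_use => the loop ends
            return messages
        results = [{"type": "tool_result",
                    "tool_use_id": b["id"],
                    "content": execute_tool(b["name"], b["input"])}
                   for b in tool_uses]
        # tool_result goes back in as a *user* message -- there is no "tool" role
        messages.append({"role": "user", "content": results})

# Demo with a stubbed LLM: the first call requests a tool, the second answers.
def fake_llm(system_prompt, tools, messages):
    if len(messages) == 1:
        return {"content": [{"type": "tool_use", "id": "t1",
                             "name": "read_file", "input": {"path": "a.txt"}}]}
    return {"content": [{"type": "text", "text": "done"}]}

history = run_agent("Read a.txt", tools=[{"name": "read_file"}],
                    call_llm=fake_llm,
                    execute_tool=lambda name, inp: "file contents")
```

After the run, `history` holds four messages: the request, the assistant's `tool_use`, the `tool_result` fed back under `"role": "user"`, and the final text answer.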
An interactive sequence diagram showing 6 turns of a real agent session with actual JSON payloads:
| Turn | What happens | Key insight |
|------|--------------|-------------|
| 1 | User request → Agent assembles context → LLM API call | `system` + `tools[]` + `messages[]` — that's all the LLM receives |
| 2 | Tool result fed back → LLM decides next action | `tool_result` goes in as `"role": "user"` — there is no `"tool"` role |
| 3 | Test failure → self-correction | Error logs in context → the LLM can reason about failures |
| 4 | MCP tool call | MCP tools are just mixed into `tools[]` — the LLM doesn't know MCP exists |
| 5 | Skill invocation | A skill is just a prompt template injected into the user message |
| 6 | Loop termination | No `tool_use` in the response = the agent stops the loop |
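Turn 1's insight is worth making concrete: each API call carries exactly three parts and nothing else. A sketch of such a request body, with field names mirroring the Anthropic Messages API shape the diagram uses (the `run_tests` tool here is a made-up example):

```python
# What the agent sends on every call -- the LLM receives nothing beyond
# these three fields. The "run_tests" tool schema is illustrative only.
request_body = {
    "system": "You are a coding agent.",          # system_prompt
    "tools": [                                    # tool schemas; MCP tools are mixed in here too
        {
            "name": "run_tests",
            "description": "Run the project's test suite",
            "input_schema": {"type": "object", "properties": {}},
        },
    ],
    "messages": [                                 # the full conversation so far
        {"role": "user", "content": "Fix the failing test"},
    ],
}
```

There is no separate channel for memory, plans, or skills: anything the agent wants the model to know must be serialized into one of these three fields.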
```
┌──────────┐      ┌──────────┐      ┌──────────┐
│   User   │ ──→  │  Agent   │ ──→  │   LLM    │
│ (human)  │ ←──  │(program) │ ←──  │(only AI) │
└──────────┘      └──────────┘      └──────────┘
```