zircote/fastmcp-lro
Large Result Offloading for MCP servers. Writes oversized tool responses to JSONL files, returning compact descriptors with extraction recipes.
Platform-specific configuration — add the following to `.claude/settings.json` under the `mcpServers` key:

```json
{
  "mcpServers": {
    "fastmcp-lro": {
      "command": "npx",
      "args": ["-y", "fastmcp-lro"]
    }
  }
}
```
Large Result Offloading for MCP servers — demand-driven context management for tool-augmented language models.
When MCP tool responses exceed a configurable character threshold, the full dataset is written to JSONL files and the tool returns a compact descriptor with file paths, schemas, jq recipes, and a caller-defined summary.
Based on the LRO specification.
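As an illustration only (the field names below are assumptions for this sketch, not the specification's exact schema), an offloaded response replaces the large section with a compact descriptor along these lines:

```json
{
  "request_id": "req-123",
  "query": "example",
  "total_count": 500,
  "results": {
    "offloaded": true,
    "path": "/tmp/lro/search-req-123.jsonl",
    "schema": {"id": "string", "title": "string", "score": "number"},
    "jq_recipe": "jq -c 'select(.score > 0.9)' search-req-123.jsonl",
    "summary": "500 search results, one JSON object per line"
  }
}
```

The caller then extracts only the rows it needs from the JSONL file instead of holding the full result set in context.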
Install:

```shell
pip install fastmcp-lro
```

Decorator usage:

```python
from fastmcp_lro import lro_offload, OffloadSection

@lro_offload(
    sections=[
        OffloadSection(
            key="results",
            filename_prefix="search",
            schema={"id": "string", "title": "string", "score": "number"},
        ),
    ],
    inline_keys=["query", "total_count"],
    session_id_key="request_id",
)
async def search_tool(query: str) -> dict:
    return {
        "request_id": "req-123",
        "query": query,
        "total_count": 500,
        "results": [...],  # large list
    }
```

Programmatic usage:

```python
from fastmcp_lro import LROOffloader, OffloadSection

offloader = LROOffloader(threshold=50000)
section = OffloadSection(
    key="results",
    filename_prefix="search",
    schema={"id": "string", "score": "number"},
)
result = offloader.offload_if_needed(
    data=...,  # the oversized response payload
)
```
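Once a section has been offloaded, the caller streams the JSONL file and pulls in only the rows it needs. A minimal sketch of that consumption pattern (the path and row shape are illustrative, not produced by this library):

```python
import json
import tempfile
from pathlib import Path

# Illustrative offloaded rows; in practice these live at the path
# given in the descriptor returned by the tool.
rows = [
    {"id": "a1", "score": 0.95},
    {"id": "b2", "score": 0.42},
    {"id": "c3", "score": 0.91},
]

path = Path(tempfile.mkdtemp()) / "search-req-123.jsonl"
path.write_text("\n".join(json.dumps(r) for r in rows) + "\n")

# Demand-driven extraction: filter line by line and keep only the
# high-scoring IDs, instead of loading the full result set into context.
top_ids = [
    json.loads(line)["id"]
    for line in path.read_text().splitlines()
    if json.loads(line)["score"] > 0.9
]
print(top_ids)  # ['a1', 'c3']
```

Because each line is a standalone JSON object, the same filter can be expressed as a jq recipe and shipped inside the descriptor.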