# Rigorously
Automated research quality assurance. Catches fabricated citations, overclaimed results, irreproducible numbers.
Platform-specific configuration:

```json
{
  "mcpServers": {
    "rigorously": {
      "command": "npx",
      "args": ["-y", "rigorously"]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
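Alternatively, recent versions of the Claude Code CLI can register an MCP server from the command line. This is a sketch; check `claude mcp add --help` for the exact syntax on your version:

```shell
# Register the rigorously MCP server with Claude Code (syntax may vary by version)
claude mcp add rigorously -- npx -y rigorously
```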
[PyPI](https://pypi.org/project/rigorously/) · [Python](https://python.org) · [License](LICENSE)
Rigorously catches the mistakes that slip past manual review — fabricated citations, overclaimed results, irreproducible numbers, and statistical misinterpretations. One command. Eight checks.
> Tested on: Python CLI, Claude Code, pre-commit hooks · Compatible with: 16+ AI coding platforms via the Agent Skills standard and MCP
Citation errors appear in 25% of published papers. "Statistically significant" gets misused in half of biomedical literature. Overclaimed results are the #1 reason reviewers reject computational papers. Manual review catches some of these. Rigorously catches the rest.
```shell
pip install rigorously
rigorously check paper.tex
```

Journals like PLOS, Nature, and Science desk-reject up to 40% of submissions for preventable issues: citation errors, missing sections, inconsistent numbers. Each round-trip costs weeks. Rigorously catches these in 3 seconds, before a reviewer sees your paper. No account. No cloud. No data leaves your machine.
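Since the tool is advertised as working in pre-commit hooks, it can be wired in as a local hook. This is a sketch assuming `rigorously` is installed in your environment; the hook `id`, `entry`, and file pattern here are illustrative, not taken from the project's docs:

```yaml
# .pre-commit-config.yaml (illustrative local hook)
repos:
  - repo: local
    hooks:
      - id: rigorously
        name: rigorously
        entry: rigorously check
        language: system
        files: \.tex$
```

With this in place, `pre-commit run rigorously --all-files` would run the check against every tracked `.tex` file.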
| Check | What It Does |
|-------|-------------|
| Citation Verification | Verifies every bib entry against CrossRef — DOIs, titles, authors, journals |
| Overclaim Detection | Flags "proven," "validated," "novel," "impossible" — suggests precise alternatives |
| Number Consistency | Cross-checks every number across abstract, body, tables, and captions |
| Parameter Auditing | Verifies code parameters match paper claims and docstrings |
| Statistical Auditing | Checks p-values, sample sizes, test appropriateness, power analysis |
| Evidence Mapping | … |
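To make the overclaim check concrete, it can be pictured as a lexicon scan over the manuscript text. This is an illustrative sketch only, not the tool's actual implementation; the word list and suggested alternatives are assumptions:

```python
import re

# Illustrative overclaim lexicon mapping flagged words to softer
# alternatives. The real tool's word list and suggestions are unknown.
OVERCLAIMS = {
    "proven": "supported by our results",
    "validated": "evaluated",
    "novel": "to our knowledge, new",
    "impossible": "not observed in our experiments",
}

def flag_overclaims(text: str) -> list[tuple[str, str]]:
    """Return (flagged word, suggested alternative) pairs found in text."""
    hits = []
    for word, alternative in OVERCLAIMS.items():
        # Whole-word, case-insensitive match
        if re.search(rf"\b{word}\b", text, flags=re.IGNORECASE):
            hits.append((word, alternative))
    return hits

print(flag_overclaims("Our novel method is proven to work."))
# → [('proven', 'supported by our results'), ('novel', 'to our knowledge, new')]
```

The real checks are presumably far richer (for example, consulting CrossRef for citations), but the shape is the same: scan, match, and suggest a precise replacement.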