# Pavelevich/llm-checker
Advanced CLI tool that scans your hardware and tells you exactly which LLM or sLLM (small LLM) models you can run locally, with full Ollama integration.
## Intelligent Ollama Model Selector
AI-powered CLI that analyzes your hardware and recommends optimal LLM models. Deterministic scoring across 200+ dynamic models (35+ curated fallback) with hardware-calibrated memory estimation.
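The "hardware-calibrated memory estimation" mentioned above can be illustrated with the standard back-of-the-envelope formula: weights take roughly `parameters × bytes-per-parameter` for a given quantization, plus runtime overhead. The values below (bytes-per-parameter table, 1.2× overhead factor, `estimateMemGB` name) are illustrative assumptions, not llm-checker's actual calibration:

```javascript
// Approximate bytes per parameter for common GGUF quantization levels
// (illustrative values, not llm-checker's internal table).
const BYTES_PER_PARAM = { f16: 2.0, q8_0: 1.0, q5_k_m: 0.6875, q4_k_m: 0.5625 };

// Estimated RAM/VRAM in GB: weight size plus a fudge factor for the
// KV cache and runtime overhead (the 1.2 multiplier is an assumption).
function estimateMemGB(paramsB, quant) {
  return paramsB * (BYTES_PER_PARAM[quant] ?? 0.5625) * 1.2;
}

// An 8B model at Q4_K_M comes out to roughly 5.4 GB.
console.log(estimateMemGB(8, 'q4_k_m').toFixed(1));
```

This is why an 8B model that is "8 GB" at F16 can run comfortably on a 16 GB machine once quantized to 4-5 bits per weight.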
[npm version](https://www.npmjs.com/package/llm-checker) [npm downloads](https://www.npmjs.com/package/llm-checker) [License](LICENSE) [Discord](https://discord.gg/mnmYrA7T) [Node.js](https://nodejs.org/)
Start Here • Installation • Quick Start • Calibration Quick Start • Docs • Claude MCP • Commands • Scoring • Hardware • Discord
---
Choosing the right LLM for your hardware is complex. With thousands of model variants, quantization levels, and hardware configurations, finding the optimal model requires understanding memory bandwidth, VRAM limits, and performance characteristics.
LLM Checker solves this. It analyzes your system, scores every compatible model across four dimensions (Quality, Speed, Fit, Context), and delivers actionable recommendations in seconds.
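The deterministic four-dimension scoring (Quality, Speed, Fit, Context) could be sketched as a fixed weighted sum, so the same hardware and model always produce the same score. The weights, field names, and heuristics below are hypothetical placeholders, not the project's actual formula:

```javascript
// Hypothetical sketch of four-dimension scoring. All weights and
// normalization constants are illustrative assumptions.
function scoreModel(model, hw) {
  const fit = Math.min(1, hw.freeMemGB / model.requiredMemGB);           // does it fit in RAM/VRAM?
  const speed = Math.min(1, hw.memBandwidthGBs / (model.requiredMemGB * 10)); // crude tokens/s proxy
  const quality = Math.min(1, model.paramsB / 70);                       // larger models score higher, capped
  const context = Math.min(1, model.contextK / 128);                     // longer context windows score higher
  const score = 0.35 * quality + 0.25 * speed + 0.25 * fit + 0.15 * context;
  return Math.round(score * 100); // 0-100, deterministic for the same inputs
}

const hw = { freeMemGB: 16, memBandwidthGBs: 100 };
const llama8b = { paramsB: 8, requiredMemGB: 6, contextK: 128 };
console.log(scoreModel(llama8b, hw));
```

Because the function is pure, ranking every model in the pool is just a sort by score, and re-running on the same machine yields the same recommendations.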
---
| | Feature | Description |
|:---:|---|---|
| 200+ | Dynamic Model Pool | Uses the full scraped Ollama catalog/variants when available (with a 35+ curated-model fallback) |