DavidEasden/local_vision
An MCP server that uses a local VLM to convert images into descriptive text for large models without visual capabilities.
Platform-specific configuration:
```json
{
  "mcpServers": {
    "local_vision": {
      "command": "npx",
      "args": [
        "-y",
        "local_vision"
      ]
    }
  }
}
```

Add the config above to `.claude/settings.json` under the `mcpServers` key.
A native visual analysis tool based on MCP (Model Context Protocol) that uses LM Studio's vision model to analyze images.
Make sure Conda is installed, then create an `mlx` environment:

```shell
conda create -n mlx python=3.10
conda activate mlx
```

Install the required dependencies in the project directory:

```shell
pip install mcp httpx
```

The following environment variables can be set:
```shell
# Set the LM Studio service address (default: http://localhost:11434)
export LM_STUDIO_URL="http://localhost:11434"

# Set the visual model name (default: qwen3.5:2b-bf16)
export VISION_MODEL="qwen3.5:2b-bf16"

# Set the Conda environment name (default: mlx)
export MCP_CONDA_ENV="mlx"
```

Start the server:

```shell
python main.py
```

For opencode, add the following to your `opencode.json`:
```json
{
  "mcp": {
    "local_vision": {
      "type": "local",
      "command": ["python", "your path"],
      "environment": {
        "LM_STUDIO_URL": "http://localhost:11434",
        "VISION_MODEL": "qwen3.5:2b-bf16"
      }
    }
  }
}
```

Replace `your path` with the absolute path to `main.py`.