Bridge to a local Ollama LLM server. Run Llama, Mistral, Qwen, and other local models through MCP.
Part of the HumoticaOS / SymbAIon ecosystem.
```bash
pip install mcp-server-ollama-bridge
```
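If the install succeeded, the package metadata and the console script should both be visible. A quick sanity check using standard pip and shell tooling:

```bash
# Confirm the package is installed and inspect its metadata
pip show mcp-server-ollama-bridge

# Confirm the console script is on PATH
which mcp-server-ollama-bridge
```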
Add to your `claude_desktop_config.json`:
```json
{
  "mcpServers": {
    "ollama": {
      "command": "mcp-server-ollama-bridge",
      "env": {
        "OLLAMA_HOST": "http://localhost:11434"
      }
    }
  }
}
```
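Before wiring up Claude Desktop, it is worth confirming that Ollama is actually reachable at the configured `OLLAMA_HOST`. A minimal check against Ollama's standard REST API (the model tag `llama3` below is only an example):

```bash
# List the models the local Ollama server currently has available
curl http://localhost:11434/api/tags

# If the list is empty, pull a model first
ollama pull llama3
```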
Alternatively, build and run the server with Docker:

```bash
docker build -t mcp-server-ollama-bridge .
docker run -i -e OLLAMA_HOST=http://host.docker.internal:11434 mcp-server-ollama-bridge
```
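For Claude Desktop to launch the containerized server itself, the `command` entry can invoke `docker run` directly. A sketch, assuming the image name built above; note that on Linux, `host.docker.internal` only resolves if the host-gateway mapping is added explicitly:

```json
{
  "mcpServers": {
    "ollama": {
      "command": "docker",
      "args": [
        "run", "-i", "--rm",
        "--add-host=host.docker.internal:host-gateway",
        "-e", "OLLAMA_HOST=http://host.docker.internal:11434",
        "mcp-server-ollama-bridge"
      ]
    }
  }
}
```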
| Variable | Default | Description |
|---|---|---|
| `OLLAMA_HOST` | `http://localhost:11434` | Ollama server URL |
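Since `OLLAMA_HOST` is a plain URL, the bridge can also target an Ollama instance on another machine by overriding the default in the `env` block of `claude_desktop_config.json`. The relevant fragment (the address below is a placeholder):

```json
"env": {
  "OLLAMA_HOST": "http://192.168.1.50:11434"
}
```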
License: MIT
One Love, One fAmIly!
This package is officially distributed via PyPI (`pip install mcp-server-ollama-bridge`).
Note: Third-party directories may list this package but are not official or verified distribution channels for Humotica software.