A CLI host application that enables Large Language Models (LLMs) to interact with external tools through the Model Context Protocol (MCP). Currently supports OpenAI, Azure OpenAI, DeepSeek, Gemini, and Ollama models.
English | 简体中文

Prompts in MCP server: Link
Resources in MCP server: Link

For OpenAI:
export OPENAI_API_KEY='your-api-key'
By default, the OpenAI base_url is "https://api.openai.com/v1".
For DeepSeek it is "https://api.deepseek.com"; you can change it with the --base-url flag.
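For example, passing the flag explicitly (the value shown simply restates the DeepSeek default quoted above; substitute your own endpoint if it differs):

mcpclihost -m deepseek:deepseek-chat --base-url 'https://api.deepseek.com'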
For Ollama, pull a model and start the local server:
ollama pull mistral
ollama serve
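With the server running, the pulled model can then be used directly; a minimal sketch using the provider:model format described below:

mcpclihost -m ollama:mistral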
For Azure OpenAI:
export AZURE_OPENAI_DEPLOYMENT='your-azure-deployment'
export AZURE_OPENAI_API_KEY='your-azure-openai-api-key'
export AZURE_OPENAI_API_VERSION='your-azure-openai-api-version'
export AZURE_OPENAI_ENDPOINT='your-azure-openai-endpoint'
For Gemini:
export GEMINI_API_KEY='your-gemini-api-token'
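With the key exported, a Gemini model can be selected at launch (an illustrative sketch; the model name follows the provider:model format listed below):

mcpclihost -m gemini:gemini-2.5-flash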
Install with pip:
pip install mcp-cli-host
MCPCLIHost automatically looks for its configuration file at ~/.mcp.json. You can also specify a custom location using the --config flag:
{
  "mcpServers": {
    "sqlite": {
      "command": "uvx",
      "args": [
        "mcp-server-sqlite",
        "--db-path",
        "/tmp/foo.db"
      ]
    },
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/tmp"
      ]
    }
  }
}
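For instance, to load a configuration like this from a non-default location (the path below is only illustrative):

mcpclihost --config /path/to/mcp.json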
Each MCP server entry requires:
command: The command to run (e.g., uvx, npx)
args: Array of arguments for the command, for example:
  mcp-server-sqlite with the database path
  @modelcontextprotocol/server-filesystem with the directory path

Remote MCP servers reachable over HTTP can instead be configured with a url and optional headers:

{
  "mcpServers": {
    "github": {
      "url": "https://api.githubcopilot.com/mcp/",
      "headers": {"Authorization": "Bearer <your PAT>"}
    }
  }
}
MCPCLIHost is a CLI tool that lets you interact with various AI models through a unified interface, and it can use any tools exposed by configured MCP servers.
Models can be specified using the --model (-m) flag:
deepseek:deepseek-chat
openai:gpt-4
ollama:modelname
azure:gpt-4-0613
gemini:gemini-2.5-flash

# Use Ollama with Qwen model
mcpclihost -m ollama:qwen2.5:3b
# Use Deepseek
mcpclihost -m deepseek:deepseek-chat --sys-prompt 'You are a slightly playful assistant, please answer questions in a cute tone!'
Available command-line flags:

--config string: Config file location (default is $HOME/mcp.json)
--debug: Enable debug logging
--message-window int: Number of messages to keep in context (default: 10)
-m, --model string: Model to use (format: provider:model) (default "anthropic:claude-3-5-sonnet-latest")
--base-url string: Base URL for the OpenAI API (defaults to api.openai.com)
--roots string: Filesystem "roots" the MCP client exposes to servers
--sys-prompt string: System prompt
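These flags can be combined in a single invocation; a sketch using only the options documented above (all values are illustrative):

mcpclihost --debug --message-window 20 -m openai:gpt-4 --sys-prompt 'You are a concise assistant.'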
While chatting, you can use:

/help: Show available commands
/tools: List all available tools
/exclude_tool tool_name: Exclude a specific tool from the conversation
/resources: List all available resources
/get_resource: Get a specific resource by URI, for example: /get_resource resource_uri
/prompts: List all available prompts
/get_prompt: Get a specific prompt by name, for example: /get_prompt prompt_name
/servers: List configured MCP servers
/history: Display conversation history
/quit: Exit at any time

MCPCLIHost can work with any MCP-compliant server. For examples and reference implementations, see the MCP Servers Repository.
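A brief illustrative session using the slash commands above (model output omitted; the "> " prompt shown here is illustrative and the model choice is arbitrary):

mcpclihost -m openai:gpt-4
> /servers
> /tools
> /quit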
Known issue: during Sampling and Elicitation, pressing Ctrl+C crashes the process with an error like asyncio.exceptions.CancelledError; this will be resolved later.

This project is licensed under the Apache 2.0 License - see the LICENSE file for details.
Add this to claude_desktop_config.json and restart Claude Desktop.
{
  "mcpServers": {
    "mcp-cli-client": {
      "command": "npx",
      "args": []
    }
  }
}