MCPMan (MCP Manager)

MCPMan orchestrates interactions between LLMs and Model Context Protocol (MCP) servers, making it easy to create powerful agentic workflows.

Quick Start

Run MCPMan instantly with uvx, no installation required:

# Use the calculator server to perform math operations
uvx mcpman -c server_configs/calculator_server_mcp.json -i openai -m gpt-4.1-mini -p "What is 1567 * 329 and then divide by 58?"

# Use the datetime server to check time in different timezones
uvx mcpman -c server_configs/datetime_server_mcp.json -i gemini -m gemini-2.0-flash-001 -p "What time is it right now in Tokyo, London, and New York?"

# Use the filesystem server with Ollama for file operations
uvx mcpman -c server_configs/filesystem_server_mcp.json -i ollama -m llama3:8b -p "Create a file called example.txt with a sample Python function, then read it back to me"

# Use the filesystem server with LMStudio's local models
uvx mcpman -c server_configs/filesystem_server_mcp.json -i lmstudio -m qwen2.5-7b-instruct-1m -p "Create a simple JSON file with sample data and read it back to me"

You can also run straight from GitHub for quick one-off executions:

uvx --from git+https://github.com/ericflo/mcpman.git mcpman -c server_configs/calculator_server_mcp.json -i openai -m gpt-4.1-mini -p "What is 256 * 432?"

Core Features

  • One-command setup: Manage and launch MCP servers directly
  • Tool orchestration: Automatically connect LLMs to any MCP-compatible tool
  • Detailed logging: Structured JSON logs for every interaction with run ID tracking
  • Log replay: Visualize previous conversations with the mcpreplay tool
  • Multiple LLM support: Works with OpenAI, Google Gemini, Ollama, LM Studio, and more
  • Flexible configuration: Supports stdio and SSE server communication

Installation

# Install with pip
pip install mcpman

# Install with uv
uv pip install mcpman

# Install from GitHub
uv pip install git+https://github.com/ericflo/mcpman.git

Basic Usage

# Run mode (default)
mcpman -c <CONFIG_FILE> -i <IMPLEMENTATION> -m <MODEL> -p "<PROMPT>"

# Replay mode
mcpman --replay [--replay-file <LOG_FILE>]

Examples:

# Use local models with Ollama for filesystem operations
mcpman -c ./server_configs/filesystem_server_mcp.json \
       -i ollama \
       -m codellama:13b \
       -p "Create a simple bash script that counts files in the current directory and save it as count.sh"

# Use OpenAI with multi-server config
mcpman -c ./server_configs/multi_server_mcp.json \
       -i openai \
       -m gpt-4.1-mini \
       -s "You are a helpful assistant. Use tools effectively." \
       -p "Calculate 753 * 219 and tell me what time it is in Sydney, Australia"

# Replay the most recent conversation
mcpman --replay

# Replay a specific log file
mcpman --replay --replay-file ./logs/mcpman_20250422_142536.jsonl

Server Configuration

MCPMan uses JSON configuration files to define the MCP servers. Examples:

Calculator Server:

{
  "mcpServers": {
    "calculator": {
      "command": "python",
      "args": ["-m", "mcp_servers.calculator"],
      "env": {}
    }
  }
}

DateTime Server:

{
  "mcpServers": {
    "datetime": {
      "command": "python",
      "args": ["-m", "mcp_servers.datetime_utils"],
      "env": {}
    }
  }
}

Filesystem Server:

{
  "mcpServers": {
    "filesystem": {
      "command": "python",
      "args": ["-m", "mcp_servers.filesystem_ops"],
      "env": {}
    }
  }
}
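
A combined file like the multi_server_mcp.json referenced under Basic Usage isn't shown in this README; judging from the single-server schema above, it plausibly just lists several servers under mcpServers, each launched independently. Treat this as an illustrative sketch rather than the repository's actual file:

{
  "mcpServers": {
    "calculator": {
      "command": "python",
      "args": ["-m", "mcp_servers.calculator"],
      "env": {}
    },
    "datetime": {
      "command": "python",
      "args": ["-m", "mcp_servers.datetime_utils"],
      "env": {}
    }
  }
}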

Key Options

Option                        Description
-c, --config <PATH>           Path to MCP server config file
-i, --implementation <IMPL>   LLM implementation (openai, gemini, ollama, lmstudio)
-m, --model <MODEL>           Model name (gpt-4.1-mini, gemini-2.0-flash-001, llama3:8b, qwen2.5-7b-instruct-1m, etc.)
-p, --prompt <PROMPT>         User prompt (text or file path)
-s, --system <MESSAGE>        Optional system message
--base-url <URL>              Custom endpoint URL
--temperature <FLOAT>         Sampling temperature (default: 0.7)
--max-tokens <INT>            Maximum response tokens
--no-verify                   Disable task verification
--strict-tools                Enable strict mode for tool schemas (default)
--no-strict-tools             Disable strict mode for tool schemas
--replay                      Run in replay mode to visualize a previous conversation log
--replay-file <PATH>          Path to the log file to replay (defaults to latest log)

API keys are set via environment variables: OPENAI_API_KEY, GEMINI_API_KEY, etc.
Tool schema behavior can be configured with the MCPMAN_STRICT_TOOLS environment variable.
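
For example, a complete invocation with the environment set up first might look like this (the key value is a placeholder, and false for MCPMAN_STRICT_TOOLS is an assumed value, since the accepted settings aren't documented here):

# API key for the chosen implementation (placeholder value)
export OPENAI_API_KEY="sk-..."

# Assumed value; presumably mirrors --no-strict-tools
export MCPMAN_STRICT_TOOLS=false

mcpman -c ./server_configs/calculator_server_mcp.json -i openai -m gpt-4.1-mini -p "What is 12 * 34?"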

Why MCPMan?

  • Standardized interaction: Unified interface for diverse tools
  • Simplified development: Abstract away LLM-specific tool call formats
  • Debugging support: Detailed JSONL logs for every step in the agent process
  • Local or cloud: Works with both local and cloud-based LLMs

Currently Supported LLMs

  • OpenAI (GPT-4.1, GPT-4.1-mini, GPT-4.1-nano)
  • Anthropic Claude (claude-3-7-sonnet-20250219, etc.)
  • Google Gemini (gemini-2.0-flash-001, etc.)
  • OpenRouter
  • Ollama (llama3, codellama, etc.)
  • LM Studio (Qwen, Mistral, and other local models)

Development Setup

# Clone and setup
git clone https://github.com/ericflo/mcpman.git
cd mcpman

# Create environment and install deps
uv venv
source .venv/bin/activate  # Linux/macOS
# or .venv\Scripts\activate  # Windows
uv pip install -e ".[dev]"

# Run tests
pytest tests/

Project Structure

  • src/mcpman/: Core source code
  • mcp_servers/: Example MCP servers for testing
  • server_configs/: Example configuration files
  • logs/: Auto-generated structured JSONL logs
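
Because the configs launch the bundled example servers with python -m, you can also start one by hand from the repository root to check that it boots; it should simply wait for an MCP client on stdio (stop it with Ctrl+C):

# Same command the calculator config uses
python -m mcp_servers.calculator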

License

Licensed under the Apache License 2.0.
