
Deebo: Your AI Agent's Debugging Copilot


Deebo is an agentic debugging copilot for your AI coding agent that speeds up time-to-resolution by 10x. If your main coding agent is like a single-threaded process, Deebo introduces multi-threadedness to your development workflow.

Feedback, questions, or support? Check out the Deebo Guide below, or DM me on X @sriramenn.

If you think your team can benefit from Deebo, we’d love to hear from you. We’re partnering with teams who use AI agents to write production code and want to maximize their productivity. Reach out for a live walkthrough, custom setup support, or to explore early access to enterprise features.

Quick Install

npx deebo-setup@latest

Manual Configuration

After installing with deebo-setup, create a configuration file at the location your coding agent expects, with the following content. Add the guide server first; it provides help documentation even if the main installation fails:

{
  "servers": {
    "deebo-guide": {
      "command": "node",
      "args": [
        "--experimental-specifier-resolution=node",
        "--experimental-modules",
        "/Users/[your-name]/.deebo/guide-server.js"
      ],
      "env": {},
      "transportType": "stdio"
    },
    "deebo": {
      "command": "node",
      "args": [
        "--experimental-specifier-resolution=node",
        "--experimental-modules",
        "--max-old-space-size=4096",
        "/Users/[your-name]/.deebo/build/index.js"
      ],
      "env": {
        "NODE_ENV": "development",
        "USE_MEMORY_BANK": "true",
        "MOTHER_HOST": "openrouter",
        "MOTHER_MODEL": "anthropic/claude-3.5-sonnet",
        "SCENARIO_HOST": "openrouter",
        "SCENARIO_MODEL": "deepseek/deepseek-chat",
        "OPENROUTER_API_KEY": "your-openrouter-api-key"
      }
    }
  }
}

Deebo works with any OpenAI-compatible SDK, as well as Anthropic, Gemini, and OpenRouter.
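
For example, to run the mother agent against Anthropic directly while keeping OpenRouter for scenario agents, the env block might look like the sketch below. The exact host and model strings here are illustrative assumptions; check the output of deebo-setup or the Deebo repository for the values your version expects.

{
  "env": {
    "NODE_ENV": "development",
    "USE_MEMORY_BANK": "true",
    "MOTHER_HOST": "anthropic",
    "MOTHER_MODEL": "claude-3-5-sonnet-20241022",
    "SCENARIO_HOST": "openrouter",
    "SCENARIO_MODEL": "deepseek/deepseek-chat",
    "ANTHROPIC_API_KEY": "your-anthropic-api-key",
    "OPENROUTER_API_KEY": "your-openrouter-api-key"
  }
}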

Deebo Guide

Deebo helps your AI agent debug real software errors by launching automated investigations. Here's how to use it effectively.


1. Start a Debugging Session

When you hit a tough bug, ask your agent to delegate the task to Deebo.

What to include in your request:

  • 🔧 The error (message, stack trace, or behavior)
  • 📁 The absolute path to your Git repository
  • 💡 Any helpful context, such as:
    • What you’ve already tried
    • Relevant files or code snippets
    • How to reproduce the issue
    • The language or environment

Example instruction to your agent:

“This error is happening in /path/to/repo, possibly related to auth logic. I already checked the session token parser. Can you delegate this to Deebo?”

Your agent will start a Deebo session and give you a session ID (e.g. session-1745...). Save it.
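
Under the hood, your agent issues a standard MCP tools/call request to the Deebo server. The sketch below shows roughly what that JSON-RPC message could look like; the tool name (start) and argument fields are assumptions for illustration, and your agent's actual payload may differ.

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "start",
    "arguments": {
      "error": "TypeError: Cannot read properties of undefined (reading 'token')",
      "repoPath": "/path/to/repo",
      "context": "Possibly related to auth logic; the session token parser has already been checked.",
      "language": "typescript"
    }
  }
}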


2. Check Investigation Progress

After ~30 seconds, ask your agent to check the status of the Deebo session using that session ID.

You’ll get a session pulse, which shows:

  • Whether the investigation is in progress or completed
  • What the system is currently exploring
  • Summaries of findings, if any
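
The status check is just another MCP tool call from your agent to Deebo, keyed by the session ID. A minimal sketch, again with an assumed tool name and argument shape:

{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "check",
    "arguments": {
      "sessionId": "session-1745..."
    }
  }
}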

3. Add Observations (Optional)

If you notice something important — or think Deebo is heading the wrong way — you can guide the investigation.

Ask your agent to pass a short observation to Deebo.

Example:

“Let Deebo know that the file size warnings might be a red herring — the issue is probably with the CI env vars.”

This may shift the direction of the investigation.
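
An observation is forwarded to the running investigation as one more tool call. A sketch, assuming an add_observation-style tool with these argument names:

{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "add_observation",
    "arguments": {
      "sessionId": "session-1745...",
      "observation": "The file size warnings are likely a red herring; look at the CI env vars instead."
    }
  }
}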

4. Cancel a Session (Optional)

If you fixed the issue or no longer need the investigation, tell your agent to cancel the Deebo session.
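
Cancellation follows the same pattern; a sketch with an assumed cancel tool:

{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "cancel",
    "arguments": {
      "sessionId": "session-1745..."
    }
  }
}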

For AI Agents: Memory Bank Access

When asked to check debug session progress, look in: ~/.deebo/memory-bank/[project-hash]/sessions/[session-id]/logs/

The project hash is a unique identifier for each repository, and session IDs are provided when starting a debug session.

Want More?

We're piloting enterprise features that unlock unprecedented productivity gains for your team. Reach out if interested!


Watch the full work session with Cline + Deebo here (3 mins, sped up)

License

Apache License 2.0 — see LICENSE for details.
