Code Assistant

A CLI tool built in Rust for assisting with code-related tasks.

Features

  • Autonomous Exploration: The agent can intelligently explore codebases and build up working memory of the project structure.
  • Reading/Writing Files: The agent can read file contents and make changes to files as needed.
  • Working Memory Management: Efficient handling of file contents with the ability to load and unload files from memory.
  • File Summarization: Capability to create and store file summaries for quick reference and better understanding of the codebase.
  • Interactive Communication: Ability to ask users questions and get responses for better decision-making.
  • MCP Server Mode: Can run as a Model Context Protocol server, providing tools and resources to LLMs running in an MCP client.

Installation

Ensure you have Rust installed on your system. Then:

# Clone the repository
git clone https://github.com/stippi/code-assistant

# Navigate to the project directory
cd code-assistant

# Build the project
cargo build --release

# The binary will be available in target/release/code-assistant
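
To verify the build, you can run the binary directly. A quick check (assuming the standard --help flag generated by the CLI framework):

# Print usage information to confirm the binary runs
./target/release/code-assistant --help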

Configuration in Claude Desktop

The code-assistant implements the Model Context Protocol by Anthropic. This means it can be added as a plugin to MCP client applications such as Claude Desktop.

Configure Your Projects

Create the file ~/.config/code-assistant/projects.json. It declares the projects that are available in MCP server mode (via the list_projects tool and the file operation tools) and has the following structure:

{
  "code-assistant": {
    "path": "/Users/<username>/workspace/code-assistant"
  },
  "asteroids": {
    "path": "/Users/<username>/workspace/asteroids"
  },
  "zed": {
    "path": "Users/<username>/workspace/zed"
  }
}

Notes:

  • The absolute paths are not revealed by the tool, to avoid leaking this information to LLM cloud providers.
  • This file can be edited without restarting Claude Desktop or the MCP server.
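
To create the file from a shell on macOS or Linux (a minimal sketch; use any editor you like to fill in the contents):

# Create the configuration directory and the projects file
mkdir -p ~/.config/code-assistant
touch ~/.config/code-assistant/projects.json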

Configure MCP Servers

  • Open the Claude Desktop application settings (Claude -> Settings)
  • Switch to the Developer tab.
  • Click the Edit Config button.

A Finder window opens highlighting the file claude_desktop_config.json. Open that file in your favorite text editor.

An example configuration is given below:

{
  "mcpServers": {
    "code-assistant": {
      "command": "/Users/<username>/workspace/code-assistant/target/release/code-assistant",
      "args": [
        "server"
      ],
      "env": {
        "PERPLEXITY_API_KEY": "pplx-..." // optional, enables perplexity_ask tool
      }
    }
  }
}
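
To check that Claude Desktop will be able to launch the server, you can also start it manually from a terminal. A sketch, assuming the binary path from the configuration above; the process should start and wait for an MCP client on stdin/stdout:

# Start the MCP server manually; stop it with Ctrl+C
/Users/<username>/workspace/code-assistant/target/release/code-assistant server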

Usage

Code Assistant can run in two modes:

Agent Mode (Default)

code-assistant --task <TASK> [OPTIONS]

Available options:

  • --path <PATH>: Path to the code directory to analyze (default: current directory)
  • -t, --task <TASK>: Task to perform on the codebase (required unless --continue-task or --ui is used)
  • --ui: Start with GUI interface
  • --continue-task: Continue from previous state
  • -v, --verbose: Enable verbose logging
  • -p, --provider <PROVIDER>: LLM provider to use [ai-core, anthropic, open-ai, ollama, vertex, openrouter] (default: anthropic)
  • -m, --model <MODEL>: Model name to use (defaults: anthropic="claude-3-7-sonnet-20250219", open-ai="gpt-4o", vertex="gemini-2.5-pro-exp-03-25", openrouter="anthropic/claude-3-7-sonnet", ollama=required)
  • --base-url <URL>: API base URL for the LLM provider
  • --tools-type <TOOLS_TYPE>: Type of tool declaration [native, xml] (default: xml). native declares tools via the LLM provider's API; xml describes them in a custom system message.
  • --num-ctx <NUM>: Context window size in tokens (default: 8192, only relevant for Ollama)
  • --agent-mode <MODE>: Agent mode to use [working_memory, message_history] (default: message_history)
  • --record <PATH>: Record API responses to a file for testing (currently supported for Anthropic and AI Core providers)
  • --playback <PATH>: Play back a recorded session from a file
  • --fast-playback: Fast playback mode - ignore chunk timing when playing recordings

Environment variables:

  • ANTHROPIC_API_KEY: Required when using the Anthropic provider
  • OPENAI_API_KEY: Required when using the OpenAI provider
  • GOOGLE_API_KEY: Required when using the Vertex provider
  • OPENROUTER_API_KEY: Required when using the OpenRouter provider
  • PERPLEXITY_API_KEY: Required to use the Perplexity search API tools
  • Note: AI Core authentication is configured on the command line (the tool will prompt for the parameters and store them in your default keychain)
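
A typical invocation exports the key for the chosen provider before running the tool (a sketch; the key value is a placeholder):

# Provide the Anthropic API key, then run a task with the default provider
export ANTHROPIC_API_KEY="sk-ant-..."
code-assistant --task "Explain the purpose of this codebase"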

Examples:

# Analyze code in current directory using Anthropic's Claude
code-assistant --task "Explain the purpose of this codebase"

# Use OpenAI to analyze a specific directory with verbose logging
code-assistant -p open-ai --path ./my-project -t "List all API endpoints" -v

# Use Google's Vertex AI with a specific model
code-assistant -p vertex --model gemini-1.5-flash -t "Analyze code complexity"

# Use Ollama with a specific model (model is required for Ollama)
code-assistant -p ollama -m codellama --task "Find all TODO comments in the codebase"

# Use AI Core provider
code-assistant -p ai-core --task "Document the public API"

# Use with working memory agent mode instead of message history mode
code-assistant --task "Find performance bottlenecks" --agent-mode working_memory

# Continue a previously interrupted task
code-assistant --continue-task

# Start with GUI interface
code-assistant --ui

# Record a session for later playback
code-assistant --task "Optimize database queries" --record ./recordings/db-optimization.json

# Play back a recorded session with fast-forward (no timing delays)
code-assistant --playback ./recordings/db-optimization.json --fast-playback

Server Mode

Runs as a Model Context Protocol server:

code-assistant server [OPTIONS]

Available options:

  • -v, --verbose: Enable verbose logging
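
For a quick manual test outside of an MCP client, the server can be started directly from a terminal; verbose logging helps confirm it is running:

# Run the MCP server with verbose logging
code-assistant server -v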

Roadmap

This section is not really a roadmap, as the items are in no particular order; they are simply the topics most likely to be the next focus.

  • Security: Ideally, all tool execution would run in some kind of sandbox that restricts access to the files tracked by git in the project. Currently, the tools reject absolute paths, but they do not check whether relative paths point outside the project or whether they access git-ignored files. The execute_command tool runs a shell with the provided command line, which is at the moment completely unchecked.
  • UI improvements: The text input for the user message is horrible, and there is currently no markdown support or syntax highlighting for code blocks. The longbridge/gpui-component project provides a component library built on top of Zed's GPUI crate; it contains many useful components, and its license is more permissive than that of Zed's own components.
  • Agent improvements: The working memory mode is not what LLMs are trained for, so it does not work particularly well; too many tokens are generated before the next tool call. In chat message history mode, on the other hand, the total input token count can quickly grow out of hand, especially when the messages contain multiple redundant copies of the exact same resources. I would like to explore ways to automatically prune the messages to avoid that.
  • Fuzzy matching search blocks: Investigate the benefit of fuzzy matching for search blocks. Currently, files are normalized (always \n line endings, no trailing whitespace), which already increases the success rate of matching search blocks considerably, but certain forms of fuzzy matching might increase it further. Failed matches introduce quite a bit of inefficiency, since they almost always trigger the LLM to re-read a file, even when the error output of the replace_in_file tool includes the complete file and tells the LLM not to re-read it.
  • Compare approaches: Create or re-use a benchmark to compare the agent-mode approaches (working memory versus message history) and tool declaration styles (native tools versus custom XML tool parsing), measuring the percentage of tasks completed and the cost.

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
