2025-03-22

Whisper Speech Recognition MCP Server


Chinese Documentation

A high-performance speech recognition MCP server based on Faster Whisper, providing efficient audio transcription capabilities.

Features

  • Integrated with Faster Whisper for efficient speech recognition
  • Batch processing acceleration for improved transcription speed
  • Automatic CUDA acceleration (if available)
  • Support for multiple model sizes (tiny to large-v3)
  • Output formats include VTT subtitles, SRT, and JSON
  • Support for batch transcription of audio files in a folder
  • Model instance caching to avoid repeated loading
  • Dynamic batch size adjustment based on GPU memory
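The model-caching idea above can be sketched as follows. This is an illustrative stdlib sketch, not the project's actual `model_manager.py` code: the `loader` callable stands in for the real `faster_whisper.WhisperModel` constructor, and the cache key is an assumption.

```python
# Hypothetical sketch of model instance caching (not the project's actual code).
# Models are cached by (size, device) so repeated transcriptions reuse them
# instead of reloading weights each time.
_model_cache = {}

def get_cached_model(model_size, device, loader):
    """Return a cached model instance, loading it only on first use.

    `loader` stands in for the real faster-whisper WhisperModel constructor.
    """
    key = (model_size, device)
    if key not in _model_cache:
        _model_cache[key] = loader(model_size, device)
    return _model_cache[key]
```

In the real server, the loader would construct a `WhisperModel`, which is expensive; caching it makes subsequent `transcribe` calls start immediately.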

Installation

Dependencies

  • Python 3.10+
  • faster-whisper>=0.9.0
  • torch==2.6.0+cu126
  • torchaudio==2.6.0+cu126
  • mcp[cli]>=1.2.0

Installation Steps

  1. Clone or download this repository
  2. Create and activate a virtual environment (recommended)
  3. Install dependencies:
pip install -r requirements.txt

PyTorch Installation Guide

Install the appropriate version of PyTorch based on your CUDA version:

  • CUDA 12.6:

    pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
    
  • CUDA 12.1:

    pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
    
  • CPU version:

    pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cpu
    

You can check your CUDA version with nvcc --version or nvidia-smi.
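The choice of wheel index above can be expressed as a small helper. This is an illustrative sketch (the function name is an assumption, not part of the project); it mirrors the three commands listed, falling back to CPU wheels for unrecognized versions.

```python
# Illustrative helper: pick the PyTorch wheel index for a detected CUDA version.
# The mapping mirrors the install commands above; anything unrecognized
# falls back to the CPU wheels.
def torch_index_url(cuda_version):
    base = "https://download.pytorch.org/whl/"
    if cuda_version and cuda_version.startswith("12.6"):
        return base + "cu126"
    if cuda_version and cuda_version.startswith("12.1"):
        return base + "cu121"
    return base + "cpu"
```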

Usage

Starting the Server

On Windows, simply run start_server.bat.

On other platforms, run:

python whisper_server.py

Configuring Claude Desktop

  1. Open the Claude Desktop configuration file:

    • Windows: %APPDATA%\Claude\claude_desktop_config.json
    • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  2. Add the Whisper server configuration:

{
  "mcpServers": {
    "whisper": {
      "command": "python",
      "args": ["D:/path/to/whisper_server.py"],
      "env": {}
    }
  }
}

  3. Restart Claude Desktop

Available Tools

The server provides the following tools:

  1. get_model_info - Get information about available Whisper models
  2. transcribe - Transcribe a single audio file
  3. batch_transcribe - Batch transcribe audio files in a folder
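Conceptually, the three tools form a name-to-handler mapping. The sketch below is illustrative only: the real server registers these handlers through the MCP SDK, and the parameter names shown here are assumptions, not the server's actual signatures.

```python
# Illustrative sketch of the tool surface. The real server registers these
# with the MCP SDK; handler names exist in the project, but the parameters
# and return shapes here are assumptions for this sketch.
def get_model_info():
    return {"models": ["tiny", "base", "small", "medium", "large-v3"]}

def transcribe(audio_path, model="large-v3", output_format="srt"):
    return {"file": audio_path, "model": model, "format": output_format}

def batch_transcribe(folder, **options):
    return {"folder": folder, **options}

TOOLS = {
    "get_model_info": get_model_info,
    "transcribe": transcribe,
    "batch_transcribe": batch_transcribe,
}

def call_tool(name, **kwargs):
    """Dispatch a tool call by name, as an MCP client would."""
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    return TOOLS[name](**kwargs)
```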

Performance Optimization Tips

  • Using CUDA acceleration significantly improves transcription speed
  • Batch processing mode is more efficient for large numbers of short audio files
  • Batch size is automatically adjusted based on GPU memory size
  • Using VAD (Voice Activity Detection) filtering improves accuracy for long audio
  • Specifying the correct language can improve transcription quality
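The dynamic batch-size adjustment mentioned above might look roughly like the heuristic below. The thresholds and the function name are assumptions for this sketch, not the server's actual values.

```python
# Illustrative heuristic: pick a batch size from free GPU memory in GiB.
# The thresholds here are assumptions for this sketch, not the server's
# actual tuning; larger free memory allows larger batches.
def pick_batch_size(free_gib):
    if free_gib >= 16:
        return 32
    if free_gib >= 8:
        return 16
    if free_gib >= 4:
        return 8
    return 2  # fall back to a small batch on low-memory GPUs (or CPU)
```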

Local Testing Methods

  1. Use MCP Inspector for quick testing:
mcp dev whisper_server.py
  2. Use Claude Desktop for integration testing

  3. Use command line direct invocation (requires mcp[cli]):

mcp run whisper_server.py

Error Handling

The server implements the following error handling mechanisms:

  • Audio file existence check
  • Model loading failure handling
  • Transcription process exception catching
  • GPU memory management
  • Batch processing parameter adaptive adjustment
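The first three checks above can be sketched as a guard around the transcription call. This is an illustrative sketch (the function names are assumptions): validate the audio path first, then convert any model-loading or transcription failure into a structured error rather than crashing the server.

```python
import os

# Illustrative sketch of the error-handling pattern (names are assumptions):
# check the audio file exists, then catch transcription exceptions and
# return a structured error dict instead of propagating the failure.
def safe_transcribe(audio_path, transcribe_fn):
    if not os.path.isfile(audio_path):
        return {"error": f"audio file not found: {audio_path}"}
    try:
        return {"result": transcribe_fn(audio_path)}
    except Exception as exc:  # model loading or transcription failure
        return {"error": str(exc)}
```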

Project Structure

  • whisper_server.py: Main server code
  • model_manager.py: Whisper model loading and caching
  • audio_processor.py: Audio file validation and preprocessing
  • formatters.py: Output formatting (VTT, SRT, JSON)
  • transcriber.py: Core transcription logic
  • start_server.bat: Windows startup script
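As a concrete example of what `formatters.py` handles, SRT subtitles require timestamps in `HH:MM:SS,mmm` form. The sketch below is illustrative (the function name is an assumption, not the project's actual code):

```python
# Illustrative sketch of SRT timestamp formatting: seconds -> "HH:MM:SS,mmm".
def srt_timestamp(seconds):
    ms = round(seconds * 1000)
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"
```

VTT timestamps differ only in using a period (`HH:MM:SS.mmm`) instead of a comma before the milliseconds.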

License

MIT

Acknowledgements

This project was developed with the assistance of AI tools and models. Special thanks to these incredible tools and the teams behind them.
