Whisper Speech Recognition MCP Server
Chinese documentation
A high-performance speech recognition MCP server based on Faster Whisper, providing efficient audio transcription capabilities.
Features
- Integrated with Faster Whisper for efficient speech recognition
- Batch processing acceleration for improved transcription speed
- Automatic CUDA acceleration (if available)
- Support for multiple model sizes (tiny to large-v3)
- Output formats include VTT subtitles, SRT, and JSON
- Support for batch transcription of audio files in a folder
- Model instance caching to avoid repeated loading
- Dynamic batch size adjustment based on GPU memory
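The model-caching behavior above can be sketched roughly as a dictionary keyed by model size and device. This is a minimal illustration, not the project's actual `model_manager.py`; the cache layout and injectable `loader` are assumptions made so the sketch stays self-contained:

```python
# Minimal sketch of model-instance caching keyed by (model_size, device).
# In the real server the loader would construct a faster_whisper.WhisperModel;
# here it is injectable so the example has no heavy dependencies.
_model_cache = {}

def get_model(model_size: str, device: str, loader=None):
    """Return a cached model instance, loading it only on first request."""
    key = (model_size, device)
    if key not in _model_cache:
        load = loader or (lambda: f"loaded-{model_size}-{device}")
        _model_cache[key] = load()
    return _model_cache[key]
```

Repeated calls with the same size and device then reuse the already-loaded instance instead of paying the load cost again.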
Installation
Dependencies
- Python 3.10+
- faster-whisper>=0.9.0
- torch==2.6.0+cu126
- torchaudio==2.6.0+cu126
- mcp[cli]>=1.2.0
Installation Steps
- Clone or download this repository
- Create and activate a virtual environment (recommended)
- Install dependencies:
pip install -r requirements.txt
PyTorch Installation Guide
Install the appropriate version of PyTorch based on your CUDA version:
- CUDA 12.6:
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
- CUDA 12.1:
pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
- CPU version:
pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cpu
You can check your CUDA version with nvcc --version or nvidia-smi.
Usage
Starting the Server
On Windows, simply run start_server.bat.
On other platforms, run:
python whisper_server.py
Configuring Claude Desktop
- Open the Claude Desktop configuration file:
  - Windows: %APPDATA%\Claude\claude_desktop_config.json
  - macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
- Add the Whisper server configuration:
{
  "mcpServers": {
    "whisper": {
      "command": "python",
      "args": ["D:/path/to/whisper_server.py"],
      "env": {}
    }
  }
}
- Restart Claude Desktop
Available Tools
The server provides the following tools:
- get_model_info - Get information about available Whisper models
- transcribe - Transcribe a single audio file
- batch_transcribe - Batch transcribe audio files in a folder
Performance Optimization Tips
- Using CUDA acceleration significantly improves transcription speed
- Batch processing mode is more efficient for large numbers of short audio files
- Batch size is automatically adjusted based on GPU memory size
- Using VAD (Voice Activity Detection) filtering improves accuracy for long audio
- Specifying the correct language can improve transcription quality
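The dynamic batch-size adjustment mentioned above can be sketched as a simple mapping from free GPU memory to a batch size. The thresholds below are illustrative assumptions, not the server's actual tuning; in the real server the free-memory figure would come from a CUDA query such as torch.cuda.mem_get_info:

```python
def pick_batch_size(free_mem_gb: float) -> int:
    """Map free GPU memory (GiB) to a transcription batch size.

    Thresholds are illustrative, not the server's actual values.
    """
    if free_mem_gb >= 16:
        return 32
    if free_mem_gb >= 8:
        return 16
    if free_mem_gb >= 4:
        return 8
    return 4  # conservative fallback for small or shared GPUs
```

A step function like this avoids out-of-memory failures on small GPUs while still exploiting large ones.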
Local Testing Methods
- Use MCP Inspector for quick testing:
mcp dev whisper_server.py
- Use Claude Desktop for integration testing
- Use command line direct invocation (requires mcp[cli]):
mcp run whisper_server.py
Error Handling
The server implements the following error handling mechanisms:
- Audio file existence check
- Model loading failure handling
- Transcription process exception catching
- GPU memory management
- Batch processing parameter adaptive adjustment
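As a rough illustration of the first three mechanisms (existence check, load-failure handling, exception catching), a transcription call might be wrapped like this. The result-dict format and the injectable `transcribe_fn` are assumptions for the sketch, not the server's actual response shape:

```python
import os

def safe_transcribe(audio_path: str, transcribe_fn):
    """Validate the input file, run transcription, and catch failures.

    transcribe_fn stands in for the real transcription call so the
    sketch stays self-contained.
    """
    if not os.path.isfile(audio_path):
        return {"success": False, "error": f"Audio file not found: {audio_path}"}
    try:
        return {"success": True, "result": transcribe_fn(audio_path)}
    except Exception as exc:  # model loading or transcription failure
        return {"success": False, "error": str(exc)}
```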
Project Structure
- whisper_server.py: Main server code
- model_manager.py: Whisper model loading and caching
- audio_processor.py: Audio file validation and preprocessing
- formatters.py: Output formatting (VTT, SRT, JSON)
- transcriber.py: Core transcription logic
- start_server.bat: Windows startup script
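As an example of the kind of work formatters.py does, an SRT cue can be built from a segment's start/end times in seconds. This is a generic sketch of the SRT format, not the project's actual code:

```python
def srt_timestamp(seconds: float) -> str:
    """Format a time in seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def srt_block(index: int, start: float, end: float, text: str) -> str:
    """Render a single numbered SRT cue block."""
    return f"{index}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n"
```

Note that SRT uses a comma before the milliseconds, while VTT uses a period; a VTT formatter would differ mainly in that detail and the file header.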
License
MIT
Acknowledgements
This project was developed with the assistance of these amazing AI tools and models:
- GitHub Copilot - AI pair programmer
- Trae - Agentic AI coding assistant
- Cline - Autonomous AI coding agent
- DeepSeek - Advanced AI model
- Claude-3.7-Sonnet - Anthropic's powerful AI assistant
- Gemini-2.0-Flash - Google's multimodal AI model
- VS Code - Powerful code editor
- Whisper - OpenAI's speech recognition model
- Faster Whisper - Optimized Whisper implementation
Special thanks to these incredible tools and the teams behind them.