# Whisper Speech Recognition MCP Server

中文文档 (Chinese documentation)

A high-performance speech recognition MCP server based on Faster Whisper, providing efficient audio transcription capabilities.
## Features
- Integrated with Faster Whisper for efficient speech recognition
- Batch processing acceleration for improved transcription speed
- Automatic CUDA acceleration (if available)
- Support for multiple model sizes (tiny to large-v3)
- Output formats include VTT subtitles, SRT, and JSON
- Support for batch transcription of audio files in a folder
- Model instance caching to avoid repeated loading
- Dynamic batch size adjustment based on GPU memory
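The dynamic batch-size adjustment can be pictured roughly as follows; the helper name, the per-slot memory cost, and the clamping limits are illustrative assumptions, not the server's actual values:

```python
def pick_batch_size(free_vram_gb: float, max_batch: int = 32) -> int:
    """Choose a transcription batch size from free GPU memory.

    Thresholds are illustrative; the real server derives its own limits.
    Roughly one batch slot per ~0.5 GB of free VRAM, clamped to [1, max_batch].
    """
    if free_vram_gb <= 0:
        return 1  # no headroom (or CPU fallback): no batching
    return max(1, min(max_batch, int(free_vram_gb / 0.5)))
```

The point of clamping is that batch size trades memory for throughput: too small wastes the GPU, too large risks out-of-memory errors mid-transcription.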
## Installation

### Dependencies

- Python 3.10+
- `faster-whisper>=0.9.0`
- `torch==2.6.0+cu126`
- `torchaudio==2.6.0+cu126`
- `mcp[cli]>=1.2.0`
### Installation Steps

1. Clone or download this repository
2. Create and activate a virtual environment (recommended)
3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```
### PyTorch Installation Guide

Install the appropriate version of PyTorch for your CUDA version:

- CUDA 12.6:

  ```bash
  pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cu126
  ```

- CUDA 12.1:

  ```bash
  pip install torch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 --index-url https://download.pytorch.org/whl/cu121
  ```

- CPU only:

  ```bash
  pip install torch==2.6.0 torchvision==0.21.0 torchaudio==2.6.0 --index-url https://download.pytorch.org/whl/cpu
  ```

You can check your CUDA version with `nvcc --version` or `nvidia-smi`.
## Usage

### Starting the Server

On Windows, simply run `start_server.bat`.

On other platforms, run:

```bash
python whisper_server.py
```
### Configuring Claude Desktop

1. Open the Claude Desktop configuration file:
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
2. Add the Whisper server configuration:

   ```json
   {
     "mcpServers": {
       "whisper": {
         "command": "python",
         "args": ["D:/path/to/whisper_server.py"],
         "env": {}
       }
     }
   }
   ```

3. Restart Claude Desktop
## Available Tools

The server provides the following tools:

- `get_model_info` - Get information about available Whisper models
- `transcribe` - Transcribe a single audio file
- `batch_transcribe` - Batch transcribe audio files in a folder
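The exact tool signatures live in `whisper_server.py`; as a rough illustration of the first thing `batch_transcribe` has to do, here is a folder scan that collects supported audio files (the extension set and helper name are assumptions, not the project's actual API):

```python
from pathlib import Path

# Extensions assumed to be supported; the real server may accept more or fewer.
AUDIO_EXTENSIONS = {".mp3", ".wav", ".m4a", ".flac", ".ogg"}

def collect_audio_files(folder: str) -> list[Path]:
    """Return audio files in `folder`, sorted for a deterministic batch order."""
    root = Path(folder)
    if not root.is_dir():
        raise NotADirectoryError(f"not a folder: {folder}")
    return sorted(p for p in root.iterdir()
                  if p.is_file() and p.suffix.lower() in AUDIO_EXTENSIONS)
```

Sorting matters here so that repeated batch runs produce output files in a stable order.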
## Performance Optimization Tips
- Using CUDA acceleration significantly improves transcription speed
- Batch processing mode is more efficient for large numbers of short audio files
- Batch size is automatically adjusted based on GPU memory size
- Using VAD (Voice Activity Detection) filtering improves accuracy for long audio
- Specifying the correct language can improve transcription quality
## Local Testing Methods

- Use the MCP Inspector for quick testing:

  ```bash
  mcp dev whisper_server.py
  ```

- Use Claude Desktop for integration testing
- Invoke the server directly from the command line (requires `mcp[cli]`):

  ```bash
  mcp run whisper_server.py
  ```
## Error Handling
The server implements the following error handling mechanisms:
- Audio file existence check
- Model loading failure handling
- Transcription process exception catching
- GPU memory management
- Batch processing parameter adaptive adjustment
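The combination of existence checks and exception catching can be sketched as a wrapper that returns errors as data instead of raw tracebacks; the result-dict shape here is an assumption for illustration, since the real server formats errors through its MCP tool responses:

```python
import os

def safe_transcribe(transcribe_fn, audio_path: str) -> dict:
    """Run a transcription callable and report failures as structured data."""
    if not os.path.isfile(audio_path):
        return {"ok": False, "error": f"audio file not found: {audio_path}"}
    try:
        return {"ok": True, "result": transcribe_fn(audio_path)}
    except Exception as exc:  # model loading or transcription failure
        return {"ok": False, "error": str(exc)}
```

Returning structured errors matters for an MCP server because the client (e.g. Claude Desktop) can surface the message to the user rather than seeing the tool call crash.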
## Project Structure

- `whisper_server.py`: Main server code
- `model_manager.py`: Whisper model loading and caching
- `audio_processor.py`: Audio file validation and preprocessing
- `formatters.py`: Output formatting (VTT, SRT, JSON)
- `transcriber.py`: Core transcription logic
- `start_server.bat`: Windows startup script
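`model_manager.py` is described above as handling model loading and caching; a minimal sketch of such a cache looks like this, where the `loader` callable stands in for the real constructor (e.g. `faster_whisper.WhisperModel`) and the function name is an assumption:

```python
# Cache keyed on (model_size, device) so "large-v3 on cuda" and
# "large-v3 on cpu" are distinct entries.
_model_cache: dict[tuple[str, str], object] = {}

def get_model(model_size: str, device: str, loader) -> object:
    """Return a cached model instance, loading it only on first request.

    Caching on (model_size, device) avoids re-reading multi-gigabyte
    weights on every transcription call.
    """
    key = (model_size, device)
    if key not in _model_cache:
        _model_cache[key] = loader(model_size, device)
    return _model_cache[key]
```

Injecting the loader also makes the cache trivially testable without downloading any model weights.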
## License
MIT
## Acknowledgements
This project was developed with the assistance of these amazing AI tools and models:
- GitHub Copilot - AI pair programmer
- Trae - Agentic AI coding assistant
- Cline - AI-powered terminal
- DeepSeek - Advanced AI model
- Claude-3.7-Sonnet - Anthropic's powerful AI assistant
- Gemini-2.0-Flash - Google's multimodal AI model
- VS Code - Powerful code editor
- Whisper - OpenAI's speech recognition model
- Faster Whisper - Optimized Whisper implementation
Special thanks to these incredible tools and the teams behind them.