
fish-speech-mcp
An MCP server for text-to-speech synthesis (TTS) for LLMs.
Features
- Text-to-Speech: Convert text to speech using FishSpeech
- Voice Reference Saving: Save a voice reference for future voice cloning
- MCP Integration: Works with Dive and other MCP-compatible LLMs
Installation
With Dive Desktop
- Click "+ Add MCP Server" in Dive Desktop
- Copy and paste this configuration:
{
  "mcpServers": {
    "fish-speech": {
      "command": "npx",
      "args": [
        "-y",
        "@demon24ru/fish-speech-mcp"
      ]
    }
  }
}
- Click "Save" to install the MCP server
Configuration
The MCP server can be configured using environment variables:
- MCP_FISH_SPEECH_SERVER_URL: URL of the Optivus server (default: http://localhost:5000)
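For example, with Dive Desktop the variable can be set in the same JSON configuration used during installation. The env block below is a sketch that assumes your MCP client supports per-server environment variables:
{
  "mcpServers": {
    "fish-speech": {
      "command": "npx",
      "args": ["-y", "@demon24ru/fish-speech-mcp"],
      "env": {
        "MCP_FISH_SPEECH_SERVER_URL": "http://localhost:5000"
      }
    }
  }
}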
Tool Documentation
- text_to_speech
  - Convert text to speech using FishSpeech
  - Inputs:
    - text (string, required): Text to convert to speech
    - reference_id (string, optional): Identifier of a saved voice
- save_voice_reference
  - Save a voice reference for future voice cloning
  - Inputs:
    - reference_audio (string, required): Path to an audio file for voice cloning
    - reference_text (string, required): Text corresponding to the audio file for voice cloning
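For reference, an MCP tools/call request for text_to_speech looks roughly like the following; the argument values are illustrative only, not output from this server:
{
  "method": "tools/call",
  "params": {
    "name": "text_to_speech",
    "arguments": {
      "text": "Hello from fish-speech-mcp",
      "reference_id": "my-saved-voice"
    }
  }
}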
Technical Details
Communication with Optivus Server
The MCP server communicates with the Optivus server using Socket.IO. The communication flow is as follows:
- The MCP server connects to the Optivus server using a Socket.IO client
- Requests are sent to the server using the message event
- Responses are received from the server using the message event
- The MCP server handles connection, reconnection, and error scenarios automatically
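The snippet below is a minimal TypeScript sketch of that flow using socket.io-client; the payload fields (type, text) are assumptions made for illustration, not the server's documented wire format:
import { io } from "socket.io-client";

// Minimal sketch: connect to the Optivus server and exchange "message" events.
// The URL honors MCP_FISH_SPEECH_SERVER_URL, falling back to the default.
const socket = io(process.env.MCP_FISH_SPEECH_SERVER_URL ?? "http://localhost:5000", {
  reconnection: true,        // reconnection is handled automatically
  reconnectionAttempts: 5,
});

socket.on("connect", () => {
  // Requests go out on the "message" event; this payload shape is an assumption.
  socket.emit("message", { type: "text_to_speech", text: "Hello from fish-speech-mcp" });
});

// Responses come back on the same "message" event.
socket.on("message", (response) => {
  console.log("Optivus response:", response);
});

socket.on("connect_error", (err) => {
  console.error("Connection error:", err.message);
});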
Voice References
Voice references are stored in a directory managed by the Optivus server. Each reference is stored in a subdirectory named with a unique ID.
Usage Examples
Ask your LLM to:
"Convert this text to speech: Text to convert, Reference ID"
"Save a voice reference: Path to audio file, Text corresponding to the audio file"
Manual Start
If needed, start the server manually:
npx @demon24ru/fish-speech-mcp
Debug
If needed, start the server in debug mode:
npm run prepare
npx @modelcontextprotocol/inspector node ./lib/index.mjs -y
Requirements
- Node.js 20+
- MCP-compatible LLM service
License
MIT
Author
@demon24ru