
fish-speech-mcp

An MCP server for text-to-speech synthesis (TTS) for LLMs.

Features

  • Text-to-Speech: Convert text to speech using FishSpeech
  • Voice Reference Saving: Save a voice reference for later voice cloning
  • MCP Integration: Works with Dive and other MCP-compatible LLMs

Installation

A standalone installation package is not available; install the server through Dive Desktop as described below.

With Dive Desktop

  1. Click "+ Add MCP Server" in Dive Desktop
  2. Copy and paste this configuration:
{
  "mcpServers": {
    "fish-speech": {
      "command": "npx",
      "args": [
        "-y",
        "@demon24ru/fish-speech-mcp"
      ]
    }
  }
}
  1. Click "Save" to install the MCP server

Configuration

The MCP server can be configured using environment variables:

  • MCP_FISH_SPEECH_SERVER_URL: URL of the Optivus server (default: http://localhost:5000)
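
As a rough illustration, the snippet below shows how the server would typically resolve this setting at startup, falling back to the documented default when the variable is unset. This is a hedged sketch, not the package's actual source.

// Hypothetical sketch: resolve the Optivus server URL from the environment.
// The variable name and default come from this README; everything else is assumed.
const OPTIVUS_URL: string =
  process.env.MCP_FISH_SPEECH_SERVER_URL ?? "http://localhost:5000";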

Tool Documentation

  • text_to_speech

    • Convert text to speech using FishSpeech
    • Inputs:
      • text (string, required): Text to convert to speech
      • reference_id (string, optional): Identifier of a saved voice
  • save_voice_reference

    • Save a voice reference for future voice cloning
    • Inputs:
      • reference_audio (string, required): Path to an audio file for voice cloning
      • reference_text (string, required): Text corresponding to the audio file for voice cloning
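
For illustration, the sketch below invokes the text_to_speech tool described above through the official TypeScript MCP SDK (@modelcontextprotocol/sdk). The client name and the shape of the returned result are assumptions; consult the server's actual output for the real format.

// Hedged example: calling text_to_speech via the MCP TypeScript SDK.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@demon24ru/fish-speech-mcp"],
});

const client = new Client({ name: "example-client", version: "1.0.0" });
await client.connect(transport);

// reference_id is optional; omit it to use the default voice.
const result = await client.callTool({
  name: "text_to_speech",
  arguments: { text: "Hello from FishSpeech!" },
});
console.log(result);

The same pattern applies to save_voice_reference, with reference_audio and reference_text passed in the arguments object.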

Technical Details

Communication with Optivus Server

The MCP server communicates with the Optivus server using Socket.IO. The communication flow is as follows:

  1. The MCP server connects to the Optivus server using Socket.IO client
  2. Requests are sent to the server using the message event
  3. Responses are received from the server using the message event
  4. The MCP server handles connection, reconnection, and error scenarios automatically
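
To make this flow concrete, here is a minimal sketch using socket.io-client. Only the "message" event and the default URL come from this document; the payload fields are hypothetical, and the real schema is defined by the Optivus server.

// Hedged sketch of the request/response flow over Socket.IO.
import { io } from "socket.io-client";

const url = process.env.MCP_FISH_SPEECH_SERVER_URL ?? "http://localhost:5000";
const socket = io(url, {
  reconnection: true, // reconnection is handled automatically, as described above
});

socket.on("connect", () => {
  // Requests are sent on the shared "message" event; field names are assumed.
  socket.emit("message", { type: "tts", text: "Hello from FishSpeech!" });
});

// Responses arrive on the same "message" event.
socket.on("message", (response) => {
  console.log("Optivus response:", response);
  socket.disconnect();
});

socket.on("connect_error", (err) => {
  console.error("Connection failed:", err.message);
});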

Voice References

Voice references are stored in a directory managed by the Optivus server. Each reference is kept in a subdirectory named with a unique ID.

Usage Examples

Ask your LLM to:

"Convert this text to speech: Text to convert, Reference ID"
"Save a voice reference: Path to audio file, Text corresponding to the audio file"

Manual Start

If needed, start the server manually:

npx @demon24ru/fish-speech-mcp

Debug

If needed, start the server in debug mode:

npm run prepare
npx @modelcontextprotocol/inspector node ./lib/index.mjs -y

Requirements

  • Node.js 20+
  • MCP-compatible LLM service

License

MIT

Author

@demon24ru

