
fish-speech-mcp

An MCP server for text-to-speech synthesis (TTS) for LLMs.

Features

  • Text-to-Speech: Convert text to speech using FishSpeech
  • Voice Reference Saving: Save a reference voice sample for later voice cloning
  • MCP Integration: Works with Dive and other MCP-compatible LLMs

Installation


With Dive Desktop

  1. Click "+ Add MCP Server" in Dive Desktop
  2. Copy and paste this configuration:
{
  "mcpServers": {
    "fish-speech": {
      "command": "npx",
      "args": [
        "-y",
        "@demon24ru/fish-speech-mcp"
      ]
    }
  }
}
  3. Click "Save" to install the MCP server

Configuration

The MCP server can be configured using environment variables:

  • MCP_FISH_SPEECH_SERVER_URL: URL of the Optivus server (default: http://localhost:5000)
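For reference, and assuming Dive forwards environment variables to MCP servers through an env block the way other MCP hosts do, the server URL can be overridden directly in the configuration (the URL shown is only an example):

{
  "mcpServers": {
    "fish-speech": {
      "command": "npx",
      "args": [
        "-y",
        "@demon24ru/fish-speech-mcp"
      ],
      "env": {
        "MCP_FISH_SPEECH_SERVER_URL": "http://localhost:5000"
      }
    }
  }
}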

Tool Documentation

  • text_to_speech

    • Convert text to speech using FishSpeech
    • Inputs:
      • text (string, required): Text to convert to speech
      • reference_id (string, optional): Identifier of a saved voice reference to use for cloning
  • save_voice_reference

    • Save a voice reference for future voice cloning
    • Inputs:
      • reference_audio (string, required): Path to an audio file for voice cloning
      • reference_text (string, required): Text corresponding to the audio file for voice cloning
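As an illustration, a text_to_speech call issued by an MCP client carries arguments shaped roughly like this (the reference_id value is a hypothetical saved voice):

{
  "text": "Welcome to fish-speech-mcp.",
  "reference_id": "my-cloned-voice"
}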

Technical Details

Communication with Optivus Server

The MCP server communicates with the Optivus server using Socket.IO. The communication flow is as follows:

  1. The MCP server connects to the Optivus server using Socket.IO client
  2. Requests are sent to the server using the message event
  3. Responses are received from the server using the message event
  4. The MCP server handles connection, reconnection, and error scenarios automatically
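The sketch below shows roughly what this flow looks like with the socket.io-client package; the request payload fields are assumptions for illustration, not the server's documented wire format.

// Minimal sketch of the Socket.IO flow described above.
// The payload shape ({ type, text }) is an assumption for illustration only.
import { io, Socket } from "socket.io-client";

const serverUrl = process.env.MCP_FISH_SPEECH_SERVER_URL ?? "http://localhost:5000";

// Connect once; socket.io-client handles reconnection automatically.
const socket: Socket = io(serverUrl);

// Send a request on the "message" event and resolve with the next "message" reply.
function sendRequest(payload: Record<string, unknown>): Promise<unknown> {
  return new Promise((resolve, reject) => {
    socket.once("message", resolve);
    socket.once("connect_error", reject);
    socket.emit("message", payload);
  });
}

// Example: ask the Optivus server to synthesize speech.
sendRequest({ type: "tts", text: "Hello from fish-speech-mcp" })
  .then((result) => console.log("TTS result:", result))
  .catch((err) => console.error("Optivus request failed:", err));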

Voice References

Voice references are stored in a directory managed by the Optivus server. Each reference is kept in a subdirectory named with a unique ID.

Usage Examples

Ask your LLM to:

"Convert this text to speech: Text to convert, Reference ID"
"Save a voice reference: Path to audio file, Text corresponding to the audio file"

Manual Start

If needed, start the server manually:

npx @demon24ru/fish-speech-mcp

Debug

If needed, start the server in debug mode:

npm run prepare
npx @modelcontextprotocol/inspector node ./lib/index.mjs -y

Requirements

  • Node.js 20+
  • MCP-compatible LLM service

License

MIT

Author

@demon24ru
