
multimodal-mcp-client
A Multi-modal MCP client for voice-powered agentic workflows
3 years · Works with Finder · 3 GitHub Watches · 23 GitHub Forks · 162 GitHub Stars
Systemprompt Multimodal MCP Client
Website • Documentation • Blog • Get API Key
A modern voice-controlled AI interface powered by Google Gemini and Anthropic's MCP (Model Context Protocol). Transform how you interact with AI through natural speech and multimodal inputs.
⚠️ Important Note: This open source project is currently in development and early access. It is not yet compatible with Safari, but it has been tested on Chrome on Linux, Windows, and macOS. If you run into any problems, please let us know on Discord or GitHub.
If you find this project useful, please consider:
- ⭐ Starring it on GitHub
- 🔄 Sharing it with others
- 💬 Joining our Discord community
🌟 Overview
A modern Vite + TypeScript application that enables voice-controlled AI workflows through MCP (Model Context Protocol). This project revolutionizes how you interact with AI systems by combining Google Gemini's multimodal capabilities with MCP's extensible tooling system.
The client supports both custom (user-provided and configured) MCP servers and Systemprompt MCP servers. Systemprompt MCP servers can be installed through the UI with a free Systemprompt API key.
Custom MCP servers are not pre-configured and require a custom configuration file. Create a local file named mcp.config.custom.json in the config directory and add your MCP server configuration:
```json
{
  "mcpServers": {
    "my-custom-server": {
      "id": "my-custom-server",
      "env": {
        "xxx": "xxx"
      },
      "command": "node",
      "args": ["/my-custom-server/build/index.js"]
    }
  }
}
```
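For orientation, the sketch below shows one way the server entry point referenced in args could be implemented, assuming the official @modelcontextprotocol/sdk TypeScript package compiled to build/index.js. The echo tool, file layout, and names are illustrative only and not part of this project.

```typescript
// Hypothetical src/index.ts for "my-custom-server" (compiled to build/index.js).
// Assumes the official @modelcontextprotocol/sdk package; the "echo" tool is
// illustrative only.
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";

const server = new Server(
  { name: "my-custom-server", version: "0.1.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tools this server exposes to the client.
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "echo",
      description: "Echo back the provided text",
      inputSchema: {
        type: "object",
        properties: { text: { type: "string" } },
        required: ["text"],
      },
    },
  ],
}));

// Execute a tool when the client calls it.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "echo") {
    const text = String(request.params.arguments?.text ?? "");
    return { content: [{ type: "text", text }] };
  }
  throw new Error(`Unknown tool: ${request.params.name}`);
});

// The client spawns this process using the "command"/"args" entries above
// and communicates with it over stdio.
const transport = new StdioServerTransport();
await server.connect(transport);
```

Values under env in the configuration are typically passed to the spawned process as environment variables, so they can be read via process.env rather than hard-coded.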
🎯 Why Systemprompt MCP?
Transform your AI interactions with a powerful voice-first interface that combines:
| Feature | Description |
| --- | --- |
| 🗣️ Multimodal AI | Understand and process text, voice, and visual inputs naturally |
| 🛠️ MCP (Model Context Protocol) | Execute complex AI workflows with a robust tooling system |
| 🎙️ Voice-First Design | Control everything through natural speech, making AI interaction more intuitive |
Perfect for: Developers building voice-controlled AI applications and looking for innovative ways to use multimodal AI.
✨ Core Features
🎙️ Voice & Multimodal Intelligence
- Natural Voice Control: Speak naturally to control AI workflows and execute commands
- Multimodal Understanding: Process text, voice, and visual inputs simultaneously
- Real-time Voice Synthesis: Get instant audio responses from your AI interactions
🔄 AI Workflow Orchestration
- Extensible Tool System: Add custom tools and workflows through MCP
- Workflow Automation: Chain multiple AI operations with voice commands
- State Management: Robust handling of complex, multi-step AI interactions
💻 Developer Experience
- Modern Tech Stack: Built with Vite, React, TypeScript, and NextUI
- Type Safety: Full TypeScript support with comprehensive type definitions
- Hot Module Replacement: Fast development with instant feedback
- Comprehensive Testing: Built-in testing infrastructure with high coverage
🚀 Getting Started
Prerequisites
- Node.js 16.x or higher
- npm 7.x or higher
- A modern browser with Web Speech API support
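As a quick way to verify the Web Speech API prerequisite (and the Safari caveat noted above), here is a minimal feature-detection sketch; it is not part of this repository.

```typescript
// Minimal feature-detection sketch (not part of this repo): checks for the
// Web Speech API prerequisite and captures a single transcript if available.
// `webkitSpeechRecognition` is the vendor-prefixed form used by Chromium browsers.
const SpeechRecognitionImpl =
  (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;

if (!SpeechRecognitionImpl) {
  // Matches the compatibility note above: Safari is not currently supported.
  console.warn("Web Speech API is not available in this browser.");
} else {
  const recognition = new SpeechRecognitionImpl();
  recognition.lang = "en-US";
  recognition.interimResults = false;

  recognition.onresult = (event: any) => {
    // A real client would route this transcript into an AI workflow;
    // here it is just logged.
    console.log("Heard:", event.results[0][0].transcript);
  };

  recognition.start();
}
```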
Quick Start
1. Clone the repository

   ```bash
   git clone https://github.com/Ejb503/multimodal-mcp-client.git
   cd multimodal-mcp-client
   ```

2. Install dependencies

   ```bash
   npm install
   cd proxy
   npm install
   ```

3. Configure the application

   ```bash
   # Navigate to the config directory
   cd config
   # Create local configuration files
   cp mcp.config.example.json mcp.config.custom.json
   ```

   Required API Keys:
   - Google AI Studio - Gemini API key
   - systemprompt.io/console - Systemprompt API key

   Add the keys to .env (see .env.example for reference). Note that the VITE_ prefix is required to share the keys with the MCP server and client. An illustrative .env sketch follows these steps.

4. Start the development server

   ```bash
   npm run dev
   ```

   Access the development server at http://localhost:5173.
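For illustration only, a .env sketch with hypothetical variable names (consult .env.example in the repository for the actual ones); the only detail taken from the steps above is the VITE_ prefix requirement.

```bash
# Hypothetical variable names -- check .env.example for the real ones.
# The VITE_ prefix is what shares the values with the MCP server and client.
VITE_GEMINI_API_KEY=your-gemini-api-key
VITE_SYSTEMPROMPT_API_KEY=your-systemprompt-api-key
```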
🤝 Support & Community
| Resource | Link |
| --- | --- |
| 💬 Discord | Join our community |
| 🐛 Issues | GitHub Issues |
| 📚 Docs | Documentation |
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
🔮 Future Development
We're actively working on expanding the capabilities of Systemprompt MCP Client with exciting new features and extensions. Stay tuned for updates!
Related Recommendations
Converts Figma frames into front-end code for various mobile frameworks.
Advanced software engineer GPT that excels through nailing the basics.
Take an adjectivised noun, and create images making it progressively more adjective!
Siri Shortcut Finder – your go-to place for discovering amazing Siri Shortcuts with ease
I find academic articles and books for research and literature reviews.
Bridge between Ollama and MCP servers, enabling local LLMs to use Model Context Protocol tools
🧑‍🚀 A summary of the world's best LLM resources (agent frameworks, coding assistance, data processing, model training, model inference, o1 models, MCP, small language models, vision-language models).
The all-in-one Desktop & Docker AI application with built-in RAG, AI agents, No-code agent builder, MCP compatibility, and more.
Awesome MCP Servers - A curated list of Model Context Protocol servers
Enable AI assistant clients like Cursor, Windsurf and Claude Desktop to control Unreal Engine through natural language using the Model Context Protocol (MCP).
🔥 1Panel provides an intuitive web interface and MCP Server to manage websites, files, containers, databases, and LLMs on a Linux server.
Query MCP enables end-to-end management of Supabase via chat interface: read & write query executions, management API support, automatic migration versioning, access to logs and much more.
Reviews

user_AjuHgrgp
The multimodal-mcp-client by Ejb503 is an impressive application with versatile features. Its seamless integration and user-friendly interface make it incredibly convenient to handle multiple tasks efficiently. As a devoted user, I appreciate the well-designed structure and the consistent updates. Highly recommend checking it out at https://github.com/Ejb503/multimodal-mcp-client!