
Tester Client for Model Context Protocol (MCP)

Actors MCP Client

Implementation of a Model Context Protocol (MCP) client that connects to an MCP server using Server-Sent Events (SSE) and displays the conversation in a chat-like UI. It is a standalone Apify Actor designed for testing MCP servers over SSE, and it uses a Pay-per-event pricing model.

For more information, see the Model Context Protocol website or the blog post What is MCP and why does it matter?.

Once you run the Actor, check the output or logs for a link to the chat UI used to interact with the MCP server. The URL varies between runs and will look like this:

Navigate to https://...apify.net to interact with chat-ui interface.

🚀 Main features

  • 🔌 Connects to an MCP server using Server-Sent Events (SSE)
  • 💬 Provides a chat-like UI for displaying tool calls and results
  • 🇦 Connects to an Apify MCP Server for interacting with one or more Apify Actors
  • 💥 Dynamically uses tools based on context and user queries (if supported by a server)
  • 🔓 Supports Authorization headers and API keys for secure connections
  • 🪟 Open source, so you can review it, suggest improvements, or modify it

🎯 What does Tester MCP Client do?

When connected to the Actors-MCP-Server, the Tester MCP Client provides an interactive chat interface where you can ask questions such as:

  • "What are the most popular Actors for social media scraping?"
  • "Show me the best way to use the Instagram Scraper"
  • "Which Actor should I use to extract data from LinkedIn?"
  • "Can you help me understand how to scrape Google search results?"

Tester MCP Client screenshot

📖 How does it work?

The Apify MCP Client connects to a running MCP server over Server-Sent Events (SSE) and does the following:

  • Initiates an SSE connection to the MCP server at /sse.
  • Sends user queries to the MCP server via POST /message.
  • Receives real-time streamed responses via GET /sse that may include LLM output and tool-usage blocks.
  • Orchestrates tool calls based on the LLM response and displays the conversation.
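
The streamed responses in the steps above arrive as SSE frames. The sketch below parses a raw SSE chunk into events; the field names (`event:`, `data:`) follow the SSE specification, but the payload shape is illustrative, not the Actor's actual schema:

```typescript
interface SseMessage {
    event: string;
    data: string;
}

// Split a raw SSE chunk into messages. Frames are separated by a blank
// line; each frame may carry an `event:` field and one or more `data:` lines.
function parseSseChunk(chunk: string): SseMessage[] {
    const messages: SseMessage[] = [];
    for (const frame of chunk.split("\n\n")) {
        let event = "message"; // default event type per the SSE spec
        const dataLines: string[] = [];
        for (const line of frame.split("\n")) {
            if (line.startsWith("event:")) event = line.slice(6).trim();
            else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
        }
        if (dataLines.length > 0) messages.push({ event, data: dataLines.join("\n") });
    }
    return messages;
}
```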

⚙️ Usage

Normal Mode (on Apify)

You can run the Tester MCP Client on Apify and connect it to any MCP server that supports SSE. Configuration can be done via the Apify UI or API by specifying parameters such as the MCP server URL, system prompt, and API key.

Once you run the Actor, check the logs for a link to the Tester MCP Client UI, where you can interact with the MCP server. The URL differs from run to run and will look like this:

INFO  Navigate to https://......runs.apify.net in your browser to interact with an MCP server.

Standby Mode (on Apify)

In progress 🚧

💰 Pricing

The Apify MCP Client is free to use. You only pay for LLM provider usage and resources consumed on the Apify platform.

This Actor uses a modern, flexible monetization model for AI agents called Pay-per-event.

Events charged:

  • Actor start (based on memory used, charged per 128 MB unit)
  • Running time (charged every 5 minutes, per 128 MB unit)
  • Query answered (depends on the model used, not charged if you provide your own API key for LLM provider)

When you use your own LLM provider API key, running the MCP Client for 1 hour with 128 MB memory costs approximately $0.06. With the Apify Free tier (no credit card required 💳), you can run the MCP Client for 80 hours per month. Definitely enough to test your MCP server!
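
The "80 hours per month" figure can be sanity-checked with the numbers above. The cost per hour comes from this README; the monthly free-tier credit of roughly $5 is an assumption used only for this back-of-the-envelope check:

```typescript
const costPerHourUsd = 0.06;   // 128 MB, own LLM API key (figure from this README)
const freeTierCreditUsd = 5.0; // assumed monthly Apify Free tier credit

// Whole hours of runtime covered by the free credit.
const freeHours = Math.floor(freeTierCreditUsd / costPerHourUsd);
console.log(freeHours); // in the same ballpark as the "~80 hours" above
```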

📖 Architecture

Browser ← (SSE) → Tester MCP Client  ← (SSE) → MCP Server

We create this chain to keep any custom bridging logic inside the Tester MCP Client, while leaving the main MCP Server unchanged. The browser uses SSE to communicate with the Tester MCP Client, and the Tester MCP Client relies on SSE to talk to the MCP Server. This separates extra client-side logic from the core server, making it easier to maintain and debug.

  1. Navigate to https://tester-mcp-client.apify.actor?token=YOUR-API-TOKEN (or http://localhost:3000 if you are running it locally).
  2. Files index.html and client.js are served from the public/ directory.
  3. Browser opens SSE stream via GET /sse.
  4. The user's query is sent with POST /message.
  5. Query processing:
    • Calls the Large Language Model.
    • Optionally calls tools, if the LLM requests them.
  6. For each result chunk, the client emits it to the browser with sseEmit(role, content).
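
Steps 5 and 6 can be sketched as a small orchestration loop. The function names (callLlm, callTool, sseEmit) mirror the steps above, but their signatures and the block shapes here are assumptions for illustration, not the Actor's actual API:

```typescript
type Block =
    | { type: "text"; text: string }
    | { type: "tool_use"; name: string; input: unknown };

// Process one user query: ask the LLM, run any tools it requests,
// and stream each result to the browser via sseEmit.
async function processQuery(
    query: string,
    callLlm: (q: string) => Promise<Block[]>,
    callTool: (name: string, input: unknown) => Promise<string>,
    sseEmit: (role: string, content: string) => void,
): Promise<void> {
    const blocks = await callLlm(query);
    for (const block of blocks) {
        if (block.type === "text") {
            sseEmit("assistant", block.text);                       // LLM output
        } else {
            const result = await callTool(block.name, block.input); // tool call
            sseEmit("tool", result);                                // tool result
        }
    }
}
```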

Local development

The Tester MCP Client Actor is open source and available on GitHub, allowing you to modify and develop it as needed.

Download the source code:

git clone https://github.com/apify/tester-mcp-client.git
cd tester-mcp-client

Install the dependencies:

npm install

Create a .env file with the following content (refer to the .env.example file for guidance):

APIFY_TOKEN=YOUR_APIFY_TOKEN
LLM_PROVIDER_API_KEY=YOUR_API_KEY

Default values for settings such as mcpSseUrl, systemPrompt, and others are defined in the const.ts file. You can adjust these as needed during development.
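
As a rough illustration, such a defaults object might look like the following. The field names mcpSseUrl and systemPrompt come from this README; the values shown are placeholders, and the actual defaults in const.ts may differ:

```typescript
// Hypothetical shape of the defaults defined in const.ts (values are placeholders).
const defaults = {
    mcpSseUrl: "https://actors-mcp-server.apify.actor/sse", // assumed default server URL
    systemPrompt: "You are a helpful assistant with access to Apify Actors as tools.",
};
```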

Run the client locally

npm start

Navigate to http://localhost:3000 in your browser to interact with the MCP server.

Happy chatting with Apify Actors!

ⓘ Limitations and feedback

The client does not support all MCP features, such as Prompts and Resources. It also does not store the conversation, so refreshing the page clears the chat history.
