
Adapters for integrating Model Context Protocol (MCP) tools with LangChain.js applications, supporting stdio and SSE transports.


LangChain.js MCP Adapters


This library provides a lightweight wrapper that makes Anthropic Model Context Protocol (MCP) tools compatible with LangChain.js and LangGraph.js.

Features

  • 🔌 Transport Options

    • Connect to MCP servers via stdio (local) or SSE (remote)
    • Support for custom headers in SSE connections for authentication
    • Configurable reconnection strategies for both transport types
  • 🔄 Multi-Server Management

    • Connect to multiple MCP servers simultaneously
    • Auto-organize tools by server or access them as a flattened collection
    • Convenient configuration via JSON file
  • 🧩 Agent Integration

    • Compatible with LangChain.js and LangGraph.js
    • Optimized for OpenAI, Anthropic, and Google models
    • Supports rich content responses including text, images, and embedded resources
  • 🛠️ Development Features

    • Uses the debug package for debug logging
    • Flexible configuration options
    • Robust error handling

Installation

npm install @langchain/mcp-adapters

Optional Dependencies

For SSE connections with custom headers in Node.js:

npm install eventsource

For enhanced SSE header support:

npm install extended-eventsource

Example: Manage the MCP Client yourself

This example shows how you can manage your own MCP client and use it to get tools that you can pass to a prebuilt LangGraph ReAct agent.

npm install @langchain/mcp-adapters @langchain/langgraph @langchain/core @langchain/openai

export OPENAI_API_KEY=<your_api_key>

Client

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { loadMcpTools } from "@langchain/mcp-adapters";

// Initialize the ChatOpenAI model
const model = new ChatOpenAI({ modelName: "gpt-4" });

// Automatically starts and connects to an MCP reference server
const transport = new StdioClientTransport({
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-math"],
});

// Initialize the client
const client = new Client({
  name: "math-client",
  version: "1.0.0",
});

try {
  // Connect to the transport
  await client.connect(transport);

  // Get tools with custom configuration
  const tools = await loadMcpTools("math", client, {
    // Whether to throw errors if a tool fails to load (optional, default: true)
    throwOnLoadError: true,
    // Whether to prefix tool names with the server name (optional, default: false)
    prefixToolNameWithServerName: false,
    // Optional additional prefix for tool names (optional, default: "")
    additionalToolNamePrefix: "",
  });

  // Create and run the agent
  const agent = createReactAgent({ llm: model, tools });
  const agentResponse = await agent.invoke({
    messages: [{ role: "user", content: "what's (3 + 5) x 12?" }],
  });
  console.log(agentResponse);
} catch (e) {
  console.error(e);
} finally {
  // Clean up connection
  await client.close();
}

Example: Connect to one or more servers via config

The library also allows you to connect to multiple MCP servers and load tools from them:

Client

import { MultiServerMCPClient } from "@langchain/mcp-adapters";
import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

// Create client and connect to server
const client = new MultiServerMCPClient({
  // Global tool configuration options
  // Whether to throw an error if a tool fails to load (optional, default: true)
  throwOnLoadError: true,
  // Whether to prefix tool names with the server name (optional, default: true)
  prefixToolNameWithServerName: true,
  // Optional additional prefix for tool names (optional, default: "mcp")
  additionalToolNamePrefix: "mcp",

  // Server configuration
  mcpServers: {
    // adds a STDIO connection to a server named "math"
    math: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-math"],
      // Restart configuration for stdio transport
      restart: {
        enabled: true,
        maxAttempts: 3,
        delayMs: 1000,
      },
    },

    // here's a filesystem server
    filesystem: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem"],
    },

    // SSE transport example with reconnection configuration
    weather: {
      transport: "sse",
      url: "https://example.com/mcp-weather",
      headers: {
        Authorization: "Bearer token123",
      },
      useNodeEventSource: true,
      reconnect: {
        enabled: true,
        maxAttempts: 5,
        delayMs: 2000,
      },
    },
  },
});

const tools = await client.getTools();

// Create an OpenAI model
const model = new ChatOpenAI({
  modelName: "gpt-4o",
  temperature: 0,
});

// Create the React agent
const agent = createReactAgent({
  llm: model,
  tools,
});

// Run the agent
try {
  const mathResponse = await agent.invoke({
    messages: [{ role: "user", content: "what's (3 + 5) x 12?" }],
  });
  console.log(mathResponse);
} catch (error) {
  console.error("Error during agent execution:", error);
  // Tools throw ToolException for tool-specific errors
  if (error.name === "ToolException") {
    console.error("Tool execution failed:", error.message);
  }
}

await client.close();

For more detailed examples, see the examples directory.

Tool Configuration Options

When loading MCP tools either directly through loadMcpTools or via MultiServerMCPClient, you can configure the following options:

  • throwOnLoadError (boolean, default: true): Whether to throw an error if a tool fails to load
  • prefixToolNameWithServerName (boolean, default: false): If true, prefixes all tool names with the server name (e.g., serverName__toolName)
  • additionalToolNamePrefix (string, default: ""): Additional prefix to add to tool names (e.g., prefix__serverName__toolName)
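
For example, with the MultiServerMCPClient defaults (prefixToolNameWithServerName: true and additionalToolNamePrefix: "mcp"), a tool named calculate on a server named math is exposed as mcp__math__calculate. Here is a minimal sketch illustrating the effect of these options:

import { MultiServerMCPClient } from "@langchain/mcp-adapters";

const client = new MultiServerMCPClient({
  prefixToolNameWithServerName: true,
  additionalToolNamePrefix: "mcp",
  mcpServers: {
    math: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-math"],
    },
  },
});

// Each returned tool name reflects the prefix settings, e.g. "mcp__math__calculate"
const tools = await client.getTools();
console.log(tools.map((t) => t.name));

await client.close();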

Response Handling

MCP tools return results in the content_and_artifact format which can include:

  • Text content: Plain text responses
  • Image content: Base64-encoded images with MIME type
  • Embedded resources: Files, structured data, or other resources

Example for handling different content types:

const tool = tools.find((t) => t.name === "mcp__math__calculate");
const result = await tool.invoke({ expression: "(3 + 5) * 12" });

// Result format: [content, artifacts]
// - content: string | MessageContentComplex[]
// - artifacts: EmbeddedResource[]

const [textContent, artifacts] = result;

// Handle text content
if (typeof textContent === "string") {
  console.log("Result:", textContent);
} else {
  // Handle complex content (text + images)
  textContent.forEach((item) => {
    if (item.type === "text") {
      console.log("Text:", item.text);
    } else if (item.type === "image_url") {
      console.log("Image URL:", item.image_url.url);
    }
  });
}

// Handle artifacts if needed
if (artifacts.length > 0) {
  console.log("Received artifacts:", artifacts);
}

Reconnection Strategies

Both transport types support automatic reconnection:

Stdio Transport Restart

{
  transport: "stdio",
  command: "npx",
  args: ["-y", "@modelcontextprotocol/server-math"],
  restart: {
    enabled: true,      // Enable automatic restart
    maxAttempts: 3,     // Maximum restart attempts
    delayMs: 1000       // Delay between attempts in ms
  }
}

SSE Transport Reconnect

{
  transport: "sse",
  url: "https://example.com/mcp-server",
  headers: { "Authorization": "Bearer token123" },
  useNodeEventSource: true,
  reconnect: {
    enabled: true,      // Enable automatic reconnection
    maxAttempts: 5,     // Maximum reconnection attempts
    delayMs: 2000       // Delay between attempts in ms
  }
}

Error Handling

The library provides different error types to help with debugging:

  • MCPClientError: For client connection and initialization issues
  • ToolException: For errors during tool execution
  • ZodError: For configuration validation errors (invalid connection settings, etc.)

Example error handling:

try {
  const client = new MultiServerMCPClient({
    math: {
      transport: "stdio",
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-math"],
    },
  });

  const tools = await client.getTools();
  const result = await tools[0].invoke({ expression: "1 + 2" });
} catch (error) {
  if (error.name === "MCPClientError") {
    // Handle connection issues
    console.error(`Connection error (${error.serverName}):`, error.message);
  } else if (error.name === "ToolException") {
    // Handle tool execution errors
    console.error("Tool execution failed:", error.message);
  } else if (error.name === "ZodError") {
    // Handle configuration validation errors
    console.error("Configuration error:", error.issues);
    // Zod errors contain detailed information about what went wrong
    error.issues.forEach((issue) => {
      console.error(`- Path: ${issue.path.join(".")}, Error: ${issue.message}`);
    });
  } else {
    // Handle other errors
    console.error("Unexpected error:", error);
  }
}

Common Zod Validation Errors

The library uses Zod for validating configuration. Here are some common validation errors:

  • Missing required parameters: For example, omitting command for stdio transport or url for SSE transport
  • Invalid parameter types: For example, providing a number where a string is expected
  • Invalid connection configuration: For example, using an invalid URL format for SSE transport

Example Zod error for an invalid SSE URL:

{
  "issues": [
    {
      "code": "invalid_string",
      "validation": "url",
      "path": ["mcpServers", "weather", "url"],
      "message": "Invalid url"
    }
  ],
  "name": "ZodError"
}

Browser Environments

When using in browsers:

  • Native EventSource API doesn't support custom headers
  • Consider using a proxy or passing authentication via query parameters (see the sketch below)
  • May require CORS configuration on the server side
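
A minimal sketch of the query-parameter approach, assuming your MCP server accepts a token query parameter (the parameter name is hypothetical and depends on your server):

import { MultiServerMCPClient } from "@langchain/mcp-adapters";

// Native EventSource cannot send an Authorization header, so the credential is
// appended to the URL instead. "token" is a hypothetical parameter that your
// MCP server would need to accept and validate.
const client = new MultiServerMCPClient({
  mcpServers: {
    weather: {
      transport: "sse",
      url: `https://example.com/mcp-weather?token=${encodeURIComponent("token123")}`,
    },
  },
});

const tools = await client.getTools();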

Troubleshooting

Common Issues

  1. Connection Failures:

    • Verify the MCP server is running
    • Check command paths and network connectivity
  2. Tool Execution Errors:

    • Examine server logs for error messages
    • Ensure input parameters match the expected schema
  3. Headers Not Applied:

    • Install the recommended extended-eventsource package
    • Set useNodeEventSource: true in SSE connections

Debug Logging

This package makes use of the debug package for debug logging.

Logging is disabled by default, and can be enabled by setting the DEBUG environment variable as per the instructions in the debug package.

To output all debug logs from this package:

DEBUG='@langchain/mcp-adapters:*'

To output debug logs only from the client module:

DEBUG='@langchain/mcp-adapters:client'

To output debug logs only from the tools module:

DEBUG='@langchain/mcp-adapters:tools'
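
For example, to enable all debug logs from this package while running a script (app.js is just a placeholder for your own entry point):

DEBUG='@langchain/mcp-adapters:*' node app.js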

License

MIT

Acknowledgements

Big thanks to @vrknetha and @cawstudios for the initial implementation!

Contributing

Contributions are welcome! Please check out our contributing guidelines for more information.
