# LangChain MCP Adapters
This library provides a lightweight wrapper that makes Anthropic's Model Context Protocol (MCP) tools compatible with LangChain and LangGraph.
## Features
- 🛠️ Convert MCP tools into LangChain tools that can be used with LangGraph agents
- 📦 A client implementation that allows you to connect to multiple MCP servers and load tools from them
## Installation
```bash
pip install langchain-mcp-adapters
```
## Quickstart
Here is a simple example of using the MCP tools with a LangGraph agent.
```bash
pip install langchain-mcp-adapters langgraph langchain-openai
export OPENAI_API_KEY=<your_api_key>
```
### Server
First, let's create an MCP server that can add and multiply numbers.
```python
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run(transport="stdio")
```
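Before wiring the server into LangChain in the next section, it can be useful to confirm it speaks MCP on its own. Here is a minimal sanity-check sketch using the raw `mcp` client primitives; the tool name and arguments mirror `math_server.py`, and the path is the same placeholder you need to replace with your own:

```python
# check_math_server.py -- a minimal sanity check against math_server.py
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    params = StdioServerParameters(
        command="python",
        args=["/path/to/math_server.py"],  # update to your absolute path
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Call the "add" tool directly over MCP, without any LangChain layer
            result = await session.call_tool("add", {"a": 3, "b": 5})
            print(result.content)  # should report 8

asyncio.run(main())
```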
### Client
```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")

# Create server parameters for stdio connection
server_params = StdioServerParameters(
    command="python",
    # Make sure to update to the full absolute path to your math_server.py file
    args=["/path/to/math_server.py"],
)

async def main():
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            # Initialize the connection
            await session.initialize()

            # Get tools
            tools = await load_mcp_tools(session)

            # Create and run the agent
            agent = create_react_agent(model, tools)
            agent_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})

asyncio.run(main())
```
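The `ainvoke` call returns the full agent state, including the whole message history. To read just the model's final answer, take the last message from the list; a small illustrative addition you could place at the end of `main()` above:

```python
# The returned state holds the accumulated message list;
# the final AI message carries the answer.
print(agent_response["messages"][-1].content)
```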
## Multiple MCP Servers
The library also allows you to connect to multiple MCP servers and load tools from them:
### Server
```python
# math_server.py
...
```

```python
# weather_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Weather")

@mcp.tool()
async def get_weather(location: str) -> str:
    """Get weather for location."""
    return "It's always sunny in New York"

if __name__ == "__main__":
    mcp.run(transport="sse")
```
```bash
python weather_server.py
```
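FastMCP's SSE transport listens on port 8000 by default and exposes the event stream at the `/sse` path, which is why the client below connects to `http://localhost:8000/sse`. If you need a different port, FastMCP accepts server settings at construction time; a hedged sketch, so check the settings your installed `mcp` version supports:

```python
# Assumption: this mcp version accepts host/port settings as keyword arguments
mcp = FastMCP("Weather", port=8000)
```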
### Client
```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

model = ChatOpenAI(model="gpt-4o")

async def main():
    async with MultiServerMCPClient(
        {
            "math": {
                "command": "python",
                # Make sure to update to the full absolute path to your math_server.py file
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            },
            "weather": {
                # Make sure you start your weather server on port 8000
                "url": "http://localhost:8000/sse",
                "transport": "sse",
            },
        }
    ) as client:
        agent = create_react_agent(model, client.get_tools())
        math_response = await agent.ainvoke({"messages": "what's (3 + 5) x 12?"})
        weather_response = await agent.ainvoke({"messages": "what is the weather in nyc?"})

asyncio.run(main())
```
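`client.get_tools()` returns the tools from every configured server in one flat list, so a single agent can route between servers by tool name. To see what the agent actually received, you can list the tools inside the context manager; a small illustrative addition, not part of the library's example:

```python
# Inside the `async with ... as client:` block:
for tool in client.get_tools():
    print(tool.name, "-", tool.description)
```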
## Using with LangGraph API Server
> [!TIP]
> Check out this guide on getting started with the LangGraph API server.
If you want to run a LangGraph agent that uses MCP tools in a LangGraph API server, you can use the following setup:
```python
# graph.py
from contextlib import asynccontextmanager

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-latest")

@asynccontextmanager
async def make_graph():
    async with MultiServerMCPClient(
        {
            "math": {
                "command": "python",
                # Make sure to update to the full absolute path to your math_server.py file
                "args": ["/path/to/math_server.py"],
                "transport": "stdio",
            },
            "weather": {
                # Make sure you start your weather server on port 8000
                "url": "http://localhost:8000/sse",
                "transport": "sse",
            },
        }
    ) as client:
        agent = create_react_agent(model, client.get_tools())
        yield agent
```
In your `langgraph.json`, make sure to specify `make_graph` as your graph entrypoint:
```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./graph.py:make_graph"
  }
}
```
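With that in place, you can serve the graph locally with the LangGraph CLI. A hedged sketch, assuming a recent `langgraph-cli` where the in-memory dev server ships as the `inmem` extra:

```bash
# Assumes the LangGraph CLI; install it first if needed
pip install "langgraph-cli[inmem]"

# Start a local dev server from the directory containing langgraph.json
langgraph dev
```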