MCP Toolbox LangChain SDK

This Python SDK allows you to seamlessly integrate the functionalities of the MCP Toolbox for Databases into your LangChain LLM applications, enabling advanced orchestration and interaction with GenAI models.

Installation

pip install toolbox-langchain

Quickstart

Here's a minimal example to get you started using LangGraph:

from toolbox_langchain import ToolboxClient
from langchain_google_vertexai import ChatVertexAI
from langgraph.prebuilt import create_react_agent

toolbox = ToolboxClient("http://127.0.0.1:5000")
tools = toolbox.load_toolset()

model = ChatVertexAI(model="gemini-1.5-pro-002")
agent = create_react_agent(model, tools)

prompt = "How's the weather today?"

for s in agent.stream({"messages": [("user", prompt)]}, stream_mode="values"):
    message = s["messages"][-1]
    if isinstance(message, tuple):
        print(message)
    else:
        message.pretty_print()

Usage

Import and initialize the toolbox client.

from toolbox_langchain import ToolboxClient

# Replace with your Toolbox service's URL
toolbox = ToolboxClient("http://127.0.0.1:5000")

Loading Tools

Load a toolset

A toolset is a collection of related tools. You can load all tools in a toolset or a specific one:

# Load all tools
tools = toolbox.load_toolset()

# Load a specific toolset
tools = toolbox.load_toolset("my-toolset")

Load a single tool

tool = toolbox.load_tool("my-tool")

Loading individual tools gives you finer-grained control over which tools are available to your LLM agent.

Use with LangChain

LangChain's agents can dynamically choose and execute tools based on the user input. Include tools loaded from the Toolbox SDK in the agent's toolkit:

from langchain_google_vertexai import ChatVertexAI

model = ChatVertexAI(model="gemini-1.5-pro-002")

# Initialize agent with tools
agent = model.bind_tools(tools)

# Run the agent
result = agent.invoke("Do something with the tools")
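
Note that bind_tools only attaches the tool schemas so the model can propose tool calls; it does not execute them. Below is a minimal sketch of dispatching the proposed calls yourself, assuming the tools and result variables from above:

# Map tool names to the loaded Toolbox tools
tool_map = {tool.name: tool for tool in tools}

# result is an AIMessage; each proposed call carries a name and args
for call in result.tool_calls:
    output = tool_map[call["name"]].invoke(call["args"])
    print(output)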

Use with LangGraph

Integrate the Toolbox SDK with LangGraph to use Toolbox service tools within a graph-based workflow. Follow the official guide with minimal changes.

Represent Tools as Nodes

Represent each tool as a LangGraph node, encapsulating the tool's execution within the node's functionality:

from toolbox_langchain import ToolboxClient
from langchain_google_vertexai import ChatVertexAI
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

# Load tools from the Toolbox service
toolbox = ToolboxClient("http://127.0.0.1:5000")
tools = toolbox.load_toolset()

# Bind the tools so the model can emit tool calls
model = ChatVertexAI(model="gemini-1.5-pro-002").bind_tools(tools)

# Define the function that calls the model
def call_model(state: MessagesState):
    messages = state["messages"]
    response = model.invoke(messages)
    return {"messages": [response]}  # Return a list to append to existing messages

builder = StateGraph(MessagesState)
tool_node = ToolNode(tools)

builder.add_node("agent", call_model)
builder.add_node("tools", tool_node)

Connect Tools with LLM

Connect tool nodes with LLM nodes. The LLM decides which tool to use based on input or context. Tool output can be fed back into the LLM:

from typing import Literal
from langgraph.graph import END, START
from langchain_core.messages import HumanMessage

# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state['messages']
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"  # Route to "tools" node if LLM makes a tool call
    return END  # Otherwise, stop

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue)
builder.add_edge("tools", 'agent')

graph = builder.compile()

result = graph.invoke({"messages": [HumanMessage(content="Do something with the tools")]})
print(result["messages"][-1].content)

Manual Usage

Execute a tool manually using the invoke method:

result = tools[0].invoke({"name": "Alice", "age": 30})

This is useful for testing tools or when you need precise control over tool execution outside of an agent framework.
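
Since tools loaded from Toolbox behave like standard LangChain tools, you can also inspect a tool's schema before invoking it. A short sketch, assuming the tools list from above:

tool = tools[0]
print(tool.name)         # Tool name as configured in the Toolbox service
print(tool.description)  # Description shown to the LLM
print(tool.args)         # Schema of the tool's input parameters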

Authenticating Tools

[!WARNING] Always use HTTPS to connect your application with the Toolbox service, especially when using tools with authentication configured. Using HTTP exposes your application to serious security risks.

Some tools require user authentication to access sensitive data.

Supported Authentication Mechanisms

Toolbox currently supports authentication using the OIDC protocol with ID tokens (not access tokens) for Google OAuth 2.0.

Configure Tools

Refer to the Toolbox service documentation for instructions on configuring tools with authenticated parameters.

Configure SDK

You need a method to retrieve an ID token from your authentication service:

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN" # Placeholder

Add Authentication to a Tool

toolbox = ToolboxClient("http://127.0.0.1:5000")
tools = toolbox.load_toolset()

auth_tool = tools[0].add_auth_token("my_auth", get_auth_token) # Single token

multi_auth_tool = tools[0].add_auth_tokens({"my_auth": get_auth_token}) # Multiple tokens

# OR

auth_tools = [tool.add_auth_token("my_auth", get_auth_token) for tool in tools]

Add Authentication While Loading

auth_tool = toolbox.load_tool("my-tool", auth_tokens={"my_auth": get_auth_token})

auth_tools = toolbox.load_toolset(auth_tokens={"my_auth": get_auth_token})

[!NOTE] Adding auth tokens during loading only affects the tools loaded within that call.

Complete Example

from toolbox_langchain import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN" # Placeholder

toolbox = ToolboxClient("http://127.0.0.1:5000")
tool = toolbox.load_tool("my-tool")

auth_tool = tool.add_auth_token("my_auth", get_auth_token)
result = auth_tool.invoke({"input": "some input"})
print(result)

Binding Parameter Values

Predetermine values for tool parameters using the SDK. These values won't be modified by the LLM. This is useful for:

  • Protecting sensitive information: API keys, secrets, etc.
  • Enforcing consistency: Ensuring specific values for certain parameters.
  • Pre-filling known data: Providing defaults or context.

Binding Parameters to a Tool

toolbox = ToolboxClient("http://127.0.0.1:5000")
tools = toolbox.load_toolset()

bound_tool = tools[0].bind_param("param", "value") # Single param

multi_bound_tool = tools[0].bind_params({"param1": "value1", "param2": "value2"}) # Multiple params

# OR

bound_tools = [tool.bind_param("param", "value") for tool in tools]

Binding Parameters While Loading

bound_tool = toolbox.load_tool("my-tool", bound_params={"param": "value"})

bound_tools = toolbox.load_toolset(bound_params={"param": "value"})

[!NOTE] Bound values during loading only affect the tools loaded in that call.

Binding Dynamic Values

Use a function to bind dynamic values:

def get_dynamic_value():
    # Logic to determine the value at call time
    return "dynamic_value"

dynamic_bound_tool = tools[0].bind_param("param", get_dynamic_value)

[!IMPORTANT] You don't need to modify tool configurations to bind parameter values.

Asynchronous Usage

For better performance through cooperative multitasking, you can use the asynchronous interfaces of the ToolboxClient.

[!NOTE] Asynchronous interfaces like aload_tool and aload_toolset require an asynchronous environment. For guidance on running asynchronous Python programs, see the asyncio documentation.

import asyncio
from toolbox_langchain import ToolboxClient

async def main():
    toolbox = ToolboxClient("http://127.0.0.1:5000")
    tool = await toolbox.aload_tool("my-tool")
    tools = await toolbox.aload_toolset()
    response = await tool.ainvoke({"input": "some input"})
    print(response)

if __name__ == "__main__":
    asyncio.run(main())
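
Because the aload_tool and aload_toolset methods are coroutines, you can also load several tools concurrently, for example with asyncio.gather. A sketch using hypothetical tool names:

import asyncio
from toolbox_langchain import ToolboxClient

async def main():
    toolbox = ToolboxClient("http://127.0.0.1:5000")
    # "tool-a" and "tool-b" are placeholders for tools configured on your service
    tool_a, tool_b = await asyncio.gather(
        toolbox.aload_tool("tool-a"),
        toolbox.aload_tool("tool-b"),
    )

asyncio.run(main())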
