MCP Toolbox LangChain SDK

This Python SDK allows you to seamlessly integrate the functionality of the MCP Toolbox for Databases into your LangChain LLM applications, enabling advanced orchestration and interaction with GenAI models.

Table of Contents

  • Installation
  • Quickstart
  • Usage
  • Loading Tools
  • Use with LangChain
  • Use with LangGraph
  • Manual usage
  • Authenticating Tools
  • Binding Parameter Values
  • Asynchronous Usage

Installation

pip install toolbox-langchain

Quickstart

Here's a minimal example to get you started using LangGraph:

from toolbox_langchain import ToolboxClient
from langchain_google_vertexai import ChatVertexAI
from langgraph.prebuilt import create_react_agent

toolbox = ToolboxClient("http://127.0.0.1:5000")
tools = toolbox.load_toolset()

model = ChatVertexAI(model="gemini-1.5-pro-002")
agent = create_react_agent(model, tools)

prompt = "How's the weather today?"

for s in agent.stream({"messages": [("user", prompt)]}, stream_mode="values"):
    message = s["messages"][-1]
    if isinstance(message, tuple):
        print(message)
    else:
        message.pretty_print()

Usage

Import and initialize the toolbox client.

from toolbox_langchain import ToolboxClient

# Replace with your Toolbox service's URL
toolbox = ToolboxClient("http://127.0.0.1:5000")

Loading Tools

Load a toolset

A toolset is a collection of related tools. You can load all tools in a toolset or a specific one:

# Load all tools
tools = toolbox.load_toolset()

# Load a specific toolset
tools = toolbox.load_toolset("my-toolset")

Load a single tool

tool = toolbox.load_tool("my-tool")

Loading individual tools gives you finer-grained control over which tools are available to your LLM agent.
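
For example, you might load only the tools a particular agent should be allowed to call and pass just that list to it. This is a minimal sketch; "search-hotels" and "book-hotel" are hypothetical tool names, so replace them with tools defined in your Toolbox configuration:

from toolbox_langchain import ToolboxClient
from langchain_google_vertexai import ChatVertexAI
from langgraph.prebuilt import create_react_agent

toolbox = ToolboxClient("http://127.0.0.1:5000")

# Load only the tools this agent should be able to call
selected_tools = [
    toolbox.load_tool("search-hotels"),  # hypothetical tool name
    toolbox.load_tool("book-hotel"),     # hypothetical tool name
]

model = ChatVertexAI(model="gemini-1.5-pro-002")
agent = create_react_agent(model, selected_tools)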

Use with LangChain

LangChain's agents can dynamically choose and execute tools based on the user input. Include tools loaded from the Toolbox SDK in the agent's toolkit:

from langchain_google_vertexai import ChatVertexAI

model = ChatVertexAI(model="gemini-1.5-pro-002")

# Initialize agent with tools
agent = model.bind_tools(tools)

# Run the agent
result = agent.invoke("Do something with the tools")
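
Note that bind_tools only attaches the tool schemas to the model; the model responds with tool calls that your code (or an agent framework such as LangGraph) still has to execute. A minimal sketch of executing those tool calls manually, assuming result is the message returned above:

# Each tool call carries the tool name and the arguments chosen by the model
for tool_call in result.tool_calls:
    selected_tool = next(t for t in tools if t.name == tool_call["name"])
    tool_output = selected_tool.invoke(tool_call["args"])
    print(tool_output)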

Use with LangGraph

Integrate the Toolbox SDK with LangGraph to use Toolbox service tools within a graph-based workflow. Follow the official LangGraph guide with minimal changes.

Represent Tools as Nodes

Represent each tool as a LangGraph node, encapsulating the tool's execution within the node's functionality:

from toolbox_langchain import ToolboxClient
from langgraph.graph import StateGraph, MessagesState
from langgraph.prebuilt import ToolNode

# Define the function that calls the model
def call_model(state: MessagesState):
    messages = state['messages']
    response = model.invoke(messages)
    return {"messages": [response]}  # Return a list to add to existing messages

model = ChatVertexAI(model="gemini-1.5-pro-002")
builder = StateGraph(MessagesState)
tool_node = ToolNode(tools)

builder.add_node("agent", call_model)
builder.add_node("tools", tool_node)

Connect Tools with LLM

Connect tool nodes with LLM nodes. The LLM decides which tool to use based on input or context. Tool output can be fed back into the LLM:

from typing import Literal
from langgraph.graph import END, START
from langchain_core.messages import HumanMessage

# Define the function that determines whether to continue or not
def should_continue(state: MessagesState) -> Literal["tools", END]:
    messages = state['messages']
    last_message = messages[-1]
    if last_message.tool_calls:
        return "tools"  # Route to "tools" node if LLM makes a tool call
    return END  # Otherwise, stop

builder.add_edge(START, "agent")
builder.add_conditional_edges("agent", should_continue)
builder.add_edge("tools", 'agent')

graph = builder.compile()

graph.invoke({"messages": [HumanMessage(content="Do something with the tools")]})

Manual usage

Execute a tool manually using the invoke method:

result = tools[0].invoke({"name": "Alice", "age": 30})

This is useful for testing tools or when you need precise control over tool execution outside of an agent framework.
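
Because loaded tools follow LangChain's standard tool interface, you can also inspect a tool's schema before calling it. A minimal sketch (the exact parameter names depend on how the tool is defined in your Toolbox configuration):

tool = tools[0]

# Standard LangChain tool metadata
print(tool.name)         # Tool name as defined in the Toolbox configuration
print(tool.description)  # Description the LLM sees when choosing tools
print(tool.args)         # Expected input parameters and their types

result = tool.invoke({"name": "Alice", "age": 30})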

Authenticating Tools

[!WARNING] Always use HTTPS to connect your application with the Toolbox service, especially when using tools with authentication configured. Using HTTP exposes your application to serious security risks.

Some tools require user authentication to access sensitive data.

Supported Authentication Mechanisms

Toolbox currently supports authentication using the OIDC protocol with ID tokens (not access tokens) for Google OAuth 2.0.

Configure Tools

Refer to the Toolbox documentation for instructions on configuring tools with authenticated parameters.

Configure SDK

You need a method to retrieve an ID token from your authentication service:

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN" # Placeholder

Add Authentication to a Tool

toolbox = ToolboxClient("http://127.0.0.1:5000")
tools = toolbox.load_toolset()

auth_tool = tools[0].add_auth_token("my_auth", get_auth_token) # Single token

multi_auth_tool = tools[0].add_auth_tokens({"my_auth": get_auth_token}) # Multiple tokens

# OR

auth_tools = [tool.add_auth_token("my_auth", get_auth_token) for tool in tools]

Add Authentication While Loading

auth_tool = toolbox.load_tool("my-tool", auth_tokens={"my_auth": get_auth_token})

auth_tools = toolbox.load_toolset(auth_tokens={"my_auth": get_auth_token})

[!NOTE] Adding auth tokens during loading only affects the tools loaded within that call.

Complete Example

import asyncio
from toolbox_langchain import ToolboxClient

async def get_auth_token():
    # ... Logic to retrieve ID token (e.g., from local storage, OAuth flow)
    # This example just returns a placeholder. Replace with your actual token retrieval.
    return "YOUR_ID_TOKEN" # Placeholder

toolbox = ToolboxClient("http://127.0.0.1:5000")
tool = toolbox.load_tool("my-tool")

auth_tool = tool.add_auth_token("my_auth", get_auth_token)
result = auth_tool.invoke({"input": "some input"})
print(result)

Binding Parameter Values

Predetermine values for tool parameters using the SDK. These values won't be modified by the LLM. This is useful for:

  • Protecting sensitive information: API keys, secrets, etc.
  • Enforcing consistency: Ensuring specific values for certain parameters.
  • Pre-filling known data: Providing defaults or context.

Binding Parameters to a Tool

toolbox = ToolboxClient("http://127.0.0.1:5000")
tools = toolbox.load_toolset()

bound_tool = tools[0].bind_param("param", "value") # Single param

multi_bound_tool = tools[0].bind_params({"param1": "value1", "param2": "value2"}) # Multiple params

# OR

bound_tools = [tool.bind_param("param", "value") for tool in tools]

Binding Parameters While Loading

bound_tool = toolbox.load_tool("my-tool", bound_params={"param": "value"})

bound_tools = toolbox.load_toolset(bound_params={"param": "value"})

[!NOTE] Parameters bound during loading only affect the tools loaded in that call.

Binding Dynamic Values

Use a function to bind dynamic values:

def get_dynamic_value():
    # Logic to determine the value
    return "dynamic_value"

dynamic_bound_tool = tool.bind_param("param", get_dynamic_value)

[!IMPORTANT] You don't need to modify tool configurations to bind parameter values.

Asynchronous Usage

For better performance through cooperative multitasking, you can use the asynchronous interfaces of the ToolboxClient.

[!NOTE] Asynchronous interfaces like aload_tool and aload_toolset require an asynchronous environment. For guidance on running asynchronous Python programs, see the asyncio documentation.

import asyncio
from toolbox_langchain import ToolboxClient

async def main():
    toolbox = ToolboxClient("http://127.0.0.1:5000")
    tool = await toolbox.aload_tool("my-tool")
    tools = await toolbox.aload_toolset()
    response = await tool.ainvoke({"param": "value"})  # Replace with your tool's actual input

if __name__ == "__main__":
    asyncio.run(main())
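
The asynchronous interface also makes it straightforward to run several tool calls concurrently. A minimal sketch using asyncio.gather (the inputs below are hypothetical placeholders):

import asyncio
from toolbox_langchain import ToolboxClient

async def main():
    toolbox = ToolboxClient("http://127.0.0.1:5000")
    tools = await toolbox.aload_toolset()

    # Invoke several tools concurrently instead of one after another
    results = await asyncio.gather(
        tools[0].ainvoke({"param": "value"}),  # placeholder input
        tools[1].ainvoke({"param": "value"}),  # placeholder input
    )
    print(results)

if __name__ == "__main__":
    asyncio.run(main())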
