2025-04-14


baml-agents

Discord · License: MIT · PyPI · status: prototype · Ruff

Building Agents with BAML for structured generation with LLMs, MCP Tools, and 12-Factor Agents principles

This repository shares useful patterns I use when working with BAML. The API is unstable and may change in future versions. Install with:

pip install "baml-agents>=0.8.0,<0.9.0"

Contents

  1. Flexible LLM Client Management in BAML
    • Effortlessly switch between different LLM providers (like OpenAI, Anthropic, Google) at runtime using simple helper functions.
    • Bridge compatibility gaps: Connect to unsupported LLM backends or tracing systems (e.g., Langfuse, LangSmith) via standard proxy setups.
    • Solve common configuration issues: Learn alternatives for managing API keys and client settings if environment variables aren't suitable.
  2. Introduction to AI Tool Use with BAML
  • Learn how to define custom actions (tools) for your AI using Pydantic models, making your agents capable of acting rather than just responding.
    • See how to integrate these tools with BAML manually or dynamically using ActionRunner for flexible structured outputs.
    • Understand how BAML translates goals into structured LLM calls that select and utilize the appropriate tool.
  3. Integrating Standardized MCP Tools with BAML
  • Discover how to use the Model Context Protocol (MCP) to plug pre-built third-party tools (like calculators or web search) into your BAML agents.
    • See ActionRunner in action, automatically discovering and integrating tools from MCP servers with minimal configuration.
    • Learn techniques to filter and select specific MCP tools to offer to the LLM, controlling the agent's capabilities precisely.
  4. Interactive BAML Development in Jupyter
    • See BAML's structured data generation stream live into your Jupyter output cell as the LLM generates it.
    • Interactively inspect the details: Use collapsible sections to view full LLM prompts and responses, optionally grouped by call or session, directly in the notebook.
  • Chat with your agent: an interactive chat widget embedded in the notebook lets you converse with your agent in real time.
  5. Simple Agent Demonstration
    • Putting it all together: Build a simple, functional agent capable of tackling a multi-step task.
    • Learn how to combine custom Python actions (defined as Action classes) with standardized MCP tools (like calculators or time servers) managed by ActionRunner.
    • Follow the agent's decision-making loop driven by BAML's structured output generation (GetNextAction), see it execute tools, and observe how it uses the results to progress.
    • Includes demonstration of JupyterBamlMonitor for transparent inspection of the underlying LLM interactions.
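The core idea behind tool use above can be sketched independently of BAML itself: the LLM emits a structured action (a tool name plus arguments), and a registry dispatches it to a plain Python callable. The names below (`ToolRegistry`, the action dict shape) are illustrative only, not the `baml-agents` API.

```python
# Illustrative sketch of the tool-dispatch pattern (NOT the baml-agents API):
# a registry maps structured action names to plain Python callables.
from typing import Callable


class ToolRegistry:
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., str]] = {}

    def add(self, fn: Callable[..., str]) -> None:
        # Register a callable under its function name.
        self._tools[fn.__name__] = fn

    def run(self, action: dict) -> str:
        # `action` mimics a structured LLM output: {"tool": ..., "args": {...}}
        return self._tools[action["tool"]](**action["args"])


def get_weather_info(city: str) -> str:
    return f"The weather in {city} is 63 degrees Fahrenheit."


registry = ToolRegistry()
registry.add(get_weather_info)
print(registry.run({"tool": "get_weather_info", "args": {"city": "Chicago"}}))
# → The weather in Chicago is 63 degrees Fahrenheit.
```

In the real library, `ActionRunner` plays the registry role, and BAML's structured generation guarantees the LLM's output matches one of the registered action schemas.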

Simple example

[!TIP] The code below is trimmed for brevity to illustrate the core concepts. Some function names or setup steps may differ slightly from the full notebook implementation for clarity in this example. The full, runnable code is available in the notebook Simple Agent Demonstration (notebooks/05_simple_agent_demo.ipynb)

def get_weather_info(city: str):
    return f"The weather in {city} is 63 degrees Fahrenheit with cloudy conditions."

def stop_execution(final_answer: str):
    return f"Final answer: {final_answer}"

r = ActionRunner() # Doing an action means using a tool

# Adding a tool to allow the agent to do math
r.add_from_mcp_server(server="uvx mcp-server-calculator")

# Adding a tool to get the current time
r.add_from_mcp_server(server="uvx mcp-timeserver")  # Note: you can also add URLs

# Adding a tool to get the current weather
r.add_action(get_weather_info)

# Adding a tool to let the agent stop execution
r.add_action(stop_execution)

async def execute_task(llm, task: str) -> str:
    interactions = []
    while True:
        action = await llm.GetNextAction(task, interactions)
        if result := is_result_available(action):
            return result

        result = r.run(action)
        interactions.append(new_interaction(action, result))

llm = LLMClient("gpt-4.1-nano")
result = await execute_task(llm, "State the current date along with avg temp between LA, NY, and Chicago in Fahrenheit.")

BAML Agent execution trace in Jupyter showing LLM prompts and completions

To try it yourself, check out the notebook Simple Agent Demonstration (notebooks/05_simple_agent_demo.ipynb).
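To see the shape of the decision loop without API keys or BAML setup, here is a runnable sketch with a stubbed "LLM" that deterministically picks tools. The stub and its action-dict format are assumptions for illustration; the real notebook drives the loop with `llm.GetNextAction` and `ActionRunner.run`.

```python
# Runnable sketch of the agent loop with a stubbed "LLM" (no network calls).
# The stub stands in for llm.GetNextAction; the dict format is illustrative.
def stub_llm_next_action(task: str, interactions: list) -> dict:
    if not interactions:
        # First step: gather information with a tool.
        return {"tool": "get_weather_info", "args": {"city": "Chicago"}}
    # Enough information gathered: stop and answer.
    return {"tool": "stop_execution", "args": {"final_answer": interactions[-1]}}


def get_weather_info(city: str) -> str:
    return f"The weather in {city} is 63 degrees Fahrenheit."


def stop_execution(final_answer: str) -> str:
    return f"Final answer: {final_answer}"


TOOLS = {"get_weather_info": get_weather_info, "stop_execution": stop_execution}


def execute_task(task: str) -> str:
    interactions: list = []
    while True:
        action = stub_llm_next_action(task, interactions)
        result = TOOLS[action["tool"]](**action["args"])
        if action["tool"] == "stop_execution":
            return result  # Terminal action ends the loop.
        interactions.append(result)  # Feed tool output back into the next step.


print(execute_task("What's the weather in Chicago?"))
# → Final answer: The weather in Chicago is 63 degrees Fahrenheit.
```

The real loop differs in two ways: the next action is chosen by the LLM via BAML's structured output rather than a stub, and termination is detected with `is_result_available` rather than by checking the tool name.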

Running the Notebooks

To run code from the notebooks/ folder, you'll first need to:

  • Install the uv Python package manager.
  • Install all dependencies: uv sync --dev
  • Generate necessary BAML code: uv run baml-cli generate
    • Alternatively, you can use the VSCode extension to do it automatically every time you edit a .baml file.

