Easily create LLM tools and agents using plain Bash/JavaScript/Python functions.

LLM Functions

This project empowers you to effortlessly build powerful LLM tools and agents using familiar languages like Bash, JavaScript, and Python.

Forget complex integrations, harness the power of function calling to connect your LLMs directly to custom code and unlock a world of possibilities. Execute system commands, process data, interact with APIs – the only limit is your imagination.

Tools showcase (llm-function-tool)

Agents showcase (llm-function-agent)

Prerequisites

Make sure you have the following tools installed:

  • argc: A bash command-line framework and command runner
  • jq: A JSON processor
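
For reference, a typical installation might look like this (a sketch; use whichever package source fits your platform):

# argc is a Rust CLI published on crates.io
cargo install argc

# jq is available from common package managers
sudo apt-get install jq   # Debian/Ubuntu
brew install jq           # macOS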

Getting Started with AIChat

Currently, AIChat is the only CLI tool that supports llm-functions. We look forward to more tools supporting llm-functions.

1. Clone the repository

git clone https://github.com/sigoden/llm-functions
cd llm-functions

2. Build tools and agents

I. Create a ./tools.txt file with each tool filename on a new line.

get_current_weather.sh
execute_command.sh
#execute_py_code.py

Where is the web_search tool?

The web_search tool itself doesn't exist directly. Instead, you can choose from a variety of web search tools.

To use one as the web_search tool, follow these steps:

  1. Choose a Tool: Available tools include:

    • web_search_cohere.sh
    • web_search_perplexity.sh
    • web_search_tavily.sh
    • web_search_vertexai.sh
  2. Link Your Choice: Use the argc command to link your chosen tool as web_search. For example, to use web_search_perplexity.sh:

    $ argc link-web-search web_search_perplexity.sh
    

    This command creates a symbolic link, making web_search.sh point to your selected web_search_perplexity.sh tool.

There is now a web_search.sh ready to be added to your ./tools.txt.
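
If you want to double-check the result, the link should show up alongside the other tools (assuming it is created in ./tools/):

ls -l tools/web_search.sh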

II. Create a ./agents.txt file with each agent name on a new line.

coder
todo

III. Build bin and functions.json

argc build

IV. Ensure that everything is ready (environment variables, Node/Python dependencies, mcp-bridge server)

argc check

3. Link LLM-functions and AIChat

AIChat expects LLM-functions to be placed in AIChat's functions_dir so that AIChat can use the tools and agents that LLM-functions provides.

You can symlink this repository directory to AIChat's functions_dir with:

ln -s "$(pwd)" "$(aichat --info | sed -n 's/^functions_dir\s\+//p')"
# OR
argc link-to-aichat

Alternatively, you can tell AIChat where the LLM-functions directory is by using an environment variable:

export AICHAT_FUNCTIONS_DIR="$(pwd)"
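
Either way, you can check the functions_dir that AIChat reports (the same line the symlink command above parses):

aichat --info | grep functions_dir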

4. Start using the functions

Done! Now you can use the tools and agents with AIChat.

aichat --role %functions% what is the weather in Paris?
aichat --agent todo list all my todos

Writing Your Own Tools

Building tools for our platform is remarkably straightforward. You can leverage your existing programming knowledge, as tools are essentially just functions written in your preferred language.

LLM Functions automatically generates the JSON declarations for the tools based on comments. Refer to ./tools/demo_tool.{sh,js,py} for examples of how to use comments for autogeneration of declarations.

Bash

Create a new bash script in the ./tools/ directory (e.g., execute_command.sh).

#!/usr/bin/env bash
set -e

# @describe Execute the shell command.
# @option --command! The command to execute.

main() {
    eval "$argc_command" >> "$LLM_OUTPUT"
}

eval "$(argc --argc-eval "$0" "$@")"

JavaScript

Create a new JavaScript file in the ./tools/ directory (e.g., execute_js_code.js).

/**
 * Execute the javascript code in node.js.
 * @typedef {Object} Args
 * @property {string} code - Javascript code to execute, such as `console.log("hello world")`
 * @param {Args} args
 */
exports.run = function ({ code }) {
  eval(code);
}
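
Because the tool simply exports a run function, you can exercise it directly with Node from the repository root (an illustrative check, not part of the official workflow):

node -e 'require("./tools/execute_js_code.js").run({ code: "console.log(1 + 1)" })'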

Python

Create a new Python script in the ./tools/ directory (e.g., execute_py_code.py).

def run(code: str):
    """Execute the python code.
    Args:
        code: Python code to execute, such as `print("hello world")`
    """
    exec(code)
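
Likewise, the Python tool is an ordinary module exposing run, so a quick manual check from the repository root might look like this (illustrative only):

python3 -c 'import sys; sys.path.insert(0, "tools"); import execute_py_code; execute_py_code.run("print(1 + 1)")'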

Writing Your Own Agents

Agent = Prompt + Tools (Function Calling) + Documents (RAG), which is equivalent to OpenAI's GPTs.

The agent has the following folder structure:

└── agents
    └── myagent
        ├── functions.json                  # JSON declarations for functions (Auto-generated)
        ├── index.yaml                      # Agent definition
        ├── tools.txt                       # Shared tools
        └── tools.{sh,js,py}                # Agent tools 

The agent definition file (index.yaml) defines crucial aspects of your agent:

name: TestAgent                             
description: This is test agent
version: 0.1.0
instructions: You are a test ai agent to ... 
conversation_starters:
  - What can you do?
variables:
  - name: foo
    description: This is a foo
documents:
  - local-file.txt
  - local-dir/
  - https://example.com/remote-file.txt

Refer to ./agents/demo for examples of how to implement an agent.
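
Once the definition is in place, register the agent and rebuild, then invoke it through AIChat just like the earlier examples (a sketch, assuming the agent directory is named myagent):

echo myagent >> ./agents.txt
argc build
argc check
aichat --agent myagent "What can you do?"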

MCP (Model Context Protocol)

  • mcp/server: Let LLM-Functions tools/agents be used through the Model Context Protocol.
  • mcp/bridge: Let external MCP tools be used by LLM-Functions.

License

The project is under the MIT License. Refer to the LICENSE file for detailed information.
