LLM Functions
This project empowers you to effortlessly build powerful LLM tools and agents using familiar languages like Bash, JavaScript, and Python.
Forget complex integrations, harness the power of function calling to connect your LLMs directly to custom code and unlock a world of possibilities. Execute system commands, process data, interact with APIs – the only limit is your imagination.
Tools Showcase
Agents Showcase
Prerequisites
Make sure you have the following tools installed:
- argc: a Bash command-line framework and command runner, used to build and run the tools
- jq: a command-line JSON processor
Getting Started with AIChat
Currently, AIChat is the only CLI tool that supports llm-functions. We look forward to more tools supporting llm-functions.
1. Clone the repository
git clone https://github.com/sigoden/llm-functions
cd llm-functions
2. Build tools and agents
I. Create a ./tools.txt file with each tool filename on a new line. A leading # comments a tool out.
get_current_weather.sh
execute_command.sh
#execute_py_code.py
Where is the web_search tool?
The web_search tool itself doesn't exist directly. Instead, you can choose from a variety of web search tools. To use one as the web_search tool, follow these steps:
- Choose a Tool: Available tools include:
  - web_search_cohere.sh
  - web_search_perplexity.sh
  - web_search_tavily.sh
  - web_search_vertexai.sh
- Link Your Choice: Use the argc command to link your chosen tool as web_search. For example, to use web_search_perplexity.sh:
  $ argc link-web-search web_search_perplexity.sh
  This command creates a symbolic link, making web_search.sh point to your selected web_search_perplexity.sh tool.
Now there is a web_search.sh ready to be added to your ./tools.txt.
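For example, you can append it from the repository root with:
echo "web_search.sh" >> tools.txt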
II. Create a ./agents.txt file with each agent name on a new line.
coder
todo
III. Build bin and functions.json
argc build
IV. Ensure that everything is ready (environment variables, Node/Python dependencies, mcp-bridge server)
argc check
3. Link LLM-functions and AIChat
AIChat expects LLM-functions to be placed in AIChat's functions_dir so that AIChat can use the tools and agents that LLM-functions provides.
You can symlink this repository directory to AIChat's functions_dir with:
ln -s "$(pwd)" "$(aichat --info | sed -n 's/^functions_dir\s\+//p')"
# OR
argc link-to-aichat
Alternatively, you can tell AIChat where the LLM-functions directory is by using an environment variable:
export AICHAT_FUNCTIONS_DIR="$(pwd)"
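Either way, you can check which functions_dir AIChat reports by filtering its info output (the same field used in the symlink command above):
aichat --info | grep functions_dir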
4. Start using the functions
Done! Now you can use the tools and agents with AIChat.
aichat --role %functions% what is the weather in Paris?
aichat --agent todo list all my todos
Writing Your Own Tools
Building tools for our platform is remarkably straightforward. You can leverage your existing programming knowledge, as tools are essentially just functions written in your preferred language.
LLM Functions automatically generates the JSON declarations for the tools based on comments. Refer to ./tools/demo_tool.{sh,js,py} for examples of how to use comments for autogeneration of declarations.
Bash
Create a new Bash script in the ./tools/ directory (e.g. execute_command.sh).
#!/usr/bin/env bash
set -e

# @describe Execute the shell command.
# @option --command! The command to execute.

main() {
    eval "$argc_command" >> "$LLM_OUTPUT"
}

eval "$(argc --argc-eval "$0" "$@")"
JavaScript
Create a new JavaScript file in the ./tools/ directory (e.g. execute_js_code.js).
/**
 * Execute the javascript code in node.js.
 * @typedef {Object} Args
 * @property {string} code - Javascript code to execute, such as `console.log("hello world")`
 * @param {Args} args
 */
exports.run = function ({ code }) {
  eval(code);
};
Python
Create a new Python script in the ./tools/ directory (e.g. execute_py_code.py).
def run(code: str):
    """Execute the python code.

    Args:
        code: Python code to execute, such as `print("hello world")`
    """
    exec(code)
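Whenever you add a tool, list it in ./tools.txt and rebuild so its declaration is regenerated, for example:
echo "execute_js_code.js" >> tools.txt
argc build
argc check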
Writing Your Own Agents
Agent = Prompt + Tools (Function Calling) + Documents (RAG), which is equivalent to OpenAI's GPTs.
The agent has the following folder structure:
└── agents
    └── myagent
        ├── functions.json    # JSON declarations for functions (Auto-generated)
        ├── index.yaml        # Agent definition
        ├── tools.txt         # Shared tools
        └── tools.{sh,js,py}  # Agent tools
The agent definition file (index.yaml) defines crucial aspects of your agent:
name: TestAgent
description: This is test agent
version: 0.1.0
instructions: You are a test ai agent to ...
conversation_starters:
  - What can you do?
variables:
  - name: foo
    description: This is a foo
documents:
  - local-file.txt
  - local-dir/
  - https://example.com/remote-file.txt
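To wire a new agent into the build, create its directory, write the index.yaml, add the agent's name to ./agents.txt, and rebuild. A minimal sequence might look like this (plain shell; substitute your own agent name):
mkdir -p agents/myagent
# create agents/myagent/index.yaml as shown above
echo "myagent" >> agents.txt
argc build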
Refer to ./agents/demo for examples of how to implement an agent.
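Agent-specific tools (the tools.{sh,js,py} files in the tree above) use the same argc comment annotations as regular tools; when a single script provides several functions, each one is typically marked with an @cmd comment. The sketch below is illustrative only, with made-up add_todo/list_todos functions; consult ./agents/demo for the authoritative layout:
#!/usr/bin/env bash
set -e

# @cmd Add a new todo item
# @option --desc! The todo description
add_todo() {
    echo "added: $argc_desc" >> "$LLM_OUTPUT"
}

# @cmd List all todo items
list_todos() {
    echo "no todos yet" >> "$LLM_OUTPUT"
}

eval "$(argc --argc-eval "$0" "$@")"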
MCP (Model Context Protocol)
- mcp/server: Let LLM-Functions tools/agents be used through the Model Context Protocol.
- mcp/bridge: Let external MCP tools be used by LLM-Functions.
Documents
License
The project is under the MIT License. Refer to the LICENSE file for detailed information.