
The most reliable AI agent framework that supports MCP.


Introduction

Upsonic is a reliability-focused framework designed for real-world applications. It enables trusted agent workflows in your organization through advanced reliability features, including verification layers, triangular architecture, validator agents, and output evaluation systems.

Why Choose Upsonic?

Upsonic is a next-generation framework that makes agents production-ready by solving four critical challenges:

1- Reliability: While other frameworks require expertise and complex coding to achieve reliable behavior, Upsonic offers easy-to-activate reliability layers without disrupting functionality.

2- Model Context Protocol (MCP): MCP lets you leverage tools with various functionalities, developed both officially and by third parties, without building custom tools from scratch.

3- Integrated Browser Use and Computer Use: Directly use and deploy agents that work on non-API systems.

4- Secure Runtime: An isolated environment for running agents.

[Image: sdk-server]


📊 Reliability Layer

LLM output reliability is critical, particularly for numerical operations and action execution. Upsonic addresses this through a multi-layered reliability system, enabling control agents and verification rounds to ensure output accuracy.

Verifier Agent: Validates outputs, tasks, and formats - detecting inconsistencies, numerical errors, and hallucinations

Editor Agent: Works with verifier feedback to revise and refine outputs until they meet quality standards

Rounds: Implements iterative quality improvement through scored verification cycles

Loops: Ensures accuracy through controlled feedback loops at critical reliability checkpoints

Upsonic is a reliability-focused framework. The results in the table below were generated with a small dataset and show success rates for transforming JSON keys. No hard-coded changes were made to the frameworks during testing; only the existing features of each framework were activated and run. GPT-4o was used in all tests.

Ten transfers were performed for each column, and the numbers show the error count: a value of 7 means 7 out of 10 transfers were done incorrectly. The table is based on initial results; we are expanding the dataset, and the tests will become more reliable as the test set grows. See the Reliability benchmark repo.

Name         Reliability Score %   ASIN Code   HS Code   CIS Code   Marketing URL   Usage URL   Warranty Time   Policy Link   Policy Description
Upsonic      99.3                  0           1         0          0               0           0               0             0
CrewAI       87.5                  0           3         2          1               1           0               1             2
Langgraph    6.3                   10          10        7          10              8           10              10            10

Activating the reliability layer takes only a small configuration class on the agent:

from upsonic import Agent

# Reliability configuration (hallucination prevention level)
class ReliabilityLayer:
    prevent_hallucination = 10

agent = Agent("Coder", reliability_layer=ReliabilityLayer, model="openai/gpt-4o")

Key features:

  • Production-Ready Scalability: Deploy seamlessly on AWS, GCP, or locally using Docker.
  • Task-Centric Design: Focus on practical task execution, with options for:
    • Basic tasks via LLM calls.
    • Advanced tasks with V1 agents.
    • Complex automation using V2 agents with MCP integration.
  • MCP Server Support: Utilize multi-client processing for high-performance tasks.
  • Tool-Calling Server: Exception-secure tool management with robust server API interactions.
  • Computer Use Integration: Execute human-like tasks using Anthropic’s ‘Computer Use’ capabilities.
  • Easy tool integration: Add your custom tools and MCP tools with a single line of code (see the sketch after this list).
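
As a quick illustration of custom tools, the sketch below assumes that plain Python methods grouped in a class are exposed as tools when the class is passed in a task's tools list, as described in the MCP section; the class and function names here are hypothetical.

from upsonic import Agent, Task

# Hypothetical custom tool: plain Python functions grouped in a class
class CurrencyTools:
    def exchange_rate(self, base: str, target: str) -> float:
        # Illustrative static rates; a real tool would query an external service
        rates = {("USD", "EUR"): 0.92, ("EUR", "USD"): 1.09}
        return rates.get((base, target), 1.0)

# The tool class is passed with a single line, alongside any MCP tools
task = Task(
    "Convert 100 USD to EUR",
    tools=[CurrencyTools],
)

agent = Agent("Finance Assistant")
agent.print_do(task)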

📙 Documentation

You can access our documentation at docs.upsonic.ai. All concepts and examples are available there.


🛠️ Getting Started

Prerequisites

  • Python 3.10 or higher
  • Access to OpenAI or Anthropic API keys (Azure and Bedrock supported)

Installation

pip install upsonic

Basic Example

Set your OPENAI_API_KEY

export OPENAI_API_KEY=sk-***
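
Alternatively, you can set the key from inside Python; this is a minimal equivalent of the shell export above (the key value is a placeholder):

import os

# Equivalent to `export OPENAI_API_KEY=sk-***`; replace the placeholder with your key
os.environ["OPENAI_API_KEY"] = "sk-***"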

Start the agent

from upsonic import Task, Agent

task = Task("Who developed you?")

agent = Agent("Coder")

agent.print_do(task)


Tool Integration via MCP

Upsonic officially supports Model Context Protocol (MCP) and custom tools. You can use hundreds of MCP servers at glama or mcprun. We also support Python functions inside a class as tools, so you can easily build your own integrations.

from upsonic import Agent, Task, ObjectResponse

# Define Fetch MCP configuration
class FetchMCP:
    command = "uvx"
    args = ["mcp-server-fetch"]

# Create response format for web content
class WebContent(ObjectResponse):
    title: str
    content: str
    summary: str
    word_count: int

# Initialize agent
web_agent = Agent(
    "Web Content Analyzer",
    model="openai/gpt-4o",  # You can use other models
)

# Create a task to analyze a web page
task = Task(
    description="Fetch and analyze the content from url. Extract the main content, title, and create a brief summary.",
    context=["https://upsonic.ai"],
    tools=[FetchMCP],
    response_format=WebContent
)
    
# Usage
result = web_agent.print_do(task)
print(result.title)
print(result.summary)


Agent with Multi-Task Example

Distribute tasks effectively across agents with our automated task distribution mechanism. It matches each task to the most suitable agent based on the relationship between the agent and the task, enabling collaborative problem-solving across agents and tasks. Structured output is essential for deploying an AI agent across apps or as a service: Upsonic uses Pydantic-based response classes (such as ObjectResponse) to define structured outputs, letting developers specify exact response formats for their AI agent tasks.

from upsonic import Agent, Task, MultiAgent, ObjectResponse
from upsonic.tools import Search
from typing import List

# Targeted Company and Our Company
our_company = "https://redis.io/"
targeted_url = "https://upsonic.ai/"


# Response formats
class CompanyResearch(ObjectResponse):
   industry: str
   product_focus: str
   company_values: List[str]
   recent_news: List[str]

class Mail(ObjectResponse):
   subject: str
   content: str


# Creating Agents
researcher = Agent(
   "Company Researcher",
   company_url=our_company
)

strategist = Agent(
   "Outreach Strategist", 
   company_url=our_company
)


# Creating tasks and connecting them through context
company_task = Task(
   "Research company website and analyze key information",
   context=[targeted_url],
   tools=[Search],
   response_format=CompanyResearch
)

position_task = Task(
   "Analyze Senior Developer position context and requirements",
   context=[company_task, targeted_url],
)

message_task = Task(
   "Create personalized outreach message using research",
   context=[company_task, position_task, targeted_url],
   response_format=Mail
)


# Run the Tasks over agents
results = MultiAgent.do(
   [researcher, strategist],
   [company_task, position_task, message_task]
)


# Print the results
print(f"Company Industry: {company_task.response.industry}")
print(f"Company Focus: {company_task.response.product_focus}")
print(f"Company Values: {company_task.response.company_values}")
print(f"Company Recent News: {company_task.response.recent_news}")
print(f"Position Analyze: {position_task.response}")
print(f"Outreach Message Subject: {message_task.response.subject}")
print(f"Outreach Message Content: {message_task.response.content}")

Direct LLM Call

Direct LLM calls offer a faster, cheaper option for simple tasks. In Upsonic you can call model providers without any abstraction layer and still organize structured outputs. You can also use tools with direct LLM calls.

from upsonic import Task, Direct

direct = Direct(model="openai/gpt-4o")

task = Task("Where can I use agents in real life?")

direct.print_do(task)
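
The paragraph above mentions structured outputs and tools; the sketch below combines both with a direct call, assuming Direct accepts the same Task options (tools, response_format) used in the agent examples above.

from upsonic import Task, Direct, ObjectResponse
from upsonic.tools import Search

# Structured response format for the direct call
class CityFacts(ObjectResponse):
    city: str
    population: int
    summary: str

direct = Direct(model="openai/gpt-4o")

# Assumption: Direct handles tools and response_format on a Task the same way an Agent does
task = Task(
    "Find the current population of Istanbul and summarize the city in one sentence.",
    tools=[Search],
    response_format=CityFacts,
)

result = direct.print_do(task)  # assumed to return the structured response, as in the agent examples
print(result.summary)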


Cookbook

You can check out many examples showing how to build agents using MCP tools and browser use with Upsonic.


Telemetry

We use anonymous telemetry to collect usage data so we can focus development where it matters most. You can disable it by setting the UPSONIC_TELEMETRY environment variable to false.

import os
os.environ["UPSONIC_TELEMETRY"] = "False"

