
Cloud APIM - Otoroshi LLM Extension

Otoroshi LLM Extension introduction

Connect, set up, secure, and seamlessly manage LLM models using a universal, OpenAI-compatible API

  • Unified interface: Simplify interactions and minimize integration hassles (see the client sketch after this section)
  • Use multiple providers: 10+ LLM providers supported right now, with many more coming
  • Load balancing: Ensure optimal performance by distributing workloads across multiple providers
  • Fallbacks: Automatically switch LLMs during failures to deliver uninterrupted and accurate responses
  • Automatic retries: LLM APIs often fail for no apparent reason. The built-in automatic retries feature can rescue a substantial number of your requests.
  • Semantic cache: Speed up repeated queries, improve response times, and reduce costs
  • Custom quotas: Manage LLM token quotas per consumer and optimise costs
  • Key vault: Securely store your LLM API keys in the Otoroshi vault or any other secret vault supported by Otoroshi
  • Observability and reporting: Every LLM request is audited with details about the consumer, the LLM provider, and usage. All audit events can be exported through multiple methods for further reporting.
  • Fine-grained authorizations: Use Otoroshi's advanced fine-grained authorization capabilities to constrain model usage based on whatever you want: user identity, apikey, consumer metadata, request details, etc.
  • Prompt fences: Validate your prompts and prompt responses to avoid leaking sensitive or personal information, irrelevant or unhelpful responses, gibberish content, etc.
  • Prompt engineering: Enhance your experience by providing contextual information in your prompts, storing them in a library for reuse, and using prompt templates for increased efficiency

Otoroshi LLM Extension is a set of Otoroshi plugins and resources to interact with LLMs. To learn more about it, see the documentation.
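
Since the extension exposes an OpenAI-compatible API, any standard OpenAI client can call a route served by the gateway. Below is a minimal sketch using the official openai Python package; the base URL, API key, and model name are placeholders that depend entirely on how your Otoroshi route and providers are configured.

```python
# Minimal sketch: calling an Otoroshi LLM Extension route through its
# OpenAI-compatible API. The base_url, api_key and model values are
# hypothetical and must match your own Otoroshi route configuration.
from openai import OpenAI

client = OpenAI(
    base_url="https://llm-gateway.example.com/v1",  # hypothetical Otoroshi route
    api_key="your-otoroshi-apikey",                 # consumer apikey, not the provider's key
)

response = client.chat.completions.create(
    model="default",  # placeholder: the actual model/provider is resolved by the gateway
    messages=[{"role": "user", "content": "Summarize what an API gateway does."}],
)

print(response.choices[0].message.content)
# Token usage reported back to the consumer; relevant when per-consumer quotas apply.
print(response.usage)
```

Load balancing, fallbacks, retries, semantic caching, and quota enforcement all happen on the gateway side, so this client code stays the same regardless of which provider ultimately serves the request.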

Supported LLM providers

All supported providers are listed below:

  • Anthropic
  • Azure OpenAI
  • Azure AI Foundry
  • Cloudflare
  • Cohere
  • Gemini
  • Groq
  • Huggingface 🇫🇷 🇪🇺
  • Mistral 🇫🇷 🇪🇺
  • Ollama (Local Models)
  • OpenAI
  • OVH AI Endpoints 🇫🇷 🇪🇺
  • Scaleway 🇫🇷 🇪🇺
  • X.ai
  • Deepseek

Requirements

Run it on JDK 17+

Related recommendations

  • av
  • Effortlessly run LLM backends, APIs, frontends, and services with a single command.

  • WangRongsheng
  • 🧑‍🚀 A summary of LLM resources (data processing, model training, model deployment, o1 models, MCP, small language models, vision-language models) | a digest of the best LLM resources.

  • 1Panel-dev
  • 🔥 1Panel provides an intuitive web interface and MCP server for managing websites, files, containers, databases, and LLMs on Linux servers.

  • rulego
  • ⛓️ RuleGo is a lightweight, high-performance, embeddable, next-generation component orchestration rule-engine framework.

  • Byaidu
  • Scientific PDF paper translation with preserved formatting - AI-based full-text bilingual translation of PDF documents with the original layout fully preserved.

  • lasso-security
  • A plugin-based gateway that orchestrates other MCPs and allows developers to build enterprise-grade agents on top of it.

  • hkr04
  • A lightweight C++ MCP (Model Context Protocol) SDK.

  • sigoden
  • Easily create LLM tools and agents using plain Bash/JavaScript/Python functions.

  • RockChinQ
  • 😎 Easy to use, 🧩 rich ecosystem - an LLM-native instant-messaging bot platform | works with QQ, WeChat (WeCom and personal WeChat), Feishu/Lark, DingTalk, Discord, Telegram, Slack, and more | supports ChatGPT, DeepSeek, Dify, Claude, and other LLMs.
