
# Agentic-Workflow-MCP
Local Agentic Workflow MCP Server
This MCP server lets you run agentic workflows against a local LLM served by the Ollama CLI. It has been tested with VS Code. The workflows are designed as follows:
### Parallel Agentic Workflow

- **Orchestrator Agent**: This agent is called first. It is responsible for orchestrating the workflow and calling the other agents as needed, depending on the user prompt.
- **Worker Agents**: The Orchestrator Agent calls these agents in parallel to perform specific tasks and waits for their responses before proceeding to the next step.
- **Aggregator Agent**: This agent is called last to aggregate the results from the other agents and return the final result to GitHub Copilot.
### Sequential Agentic Workflow

- All agents are called sequentially, in the order they are defined in the config file, and the output of one agent is passed as input to the next. The final result is returned to GitHub Copilot.
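The two modes can be sketched with stand-in agents. The agent names and bodies below are hypothetical placeholders (a real worker would call the local Ollama model); the sketch only illustrates the fan-out/aggregate versus chained control flow:

```python
import asyncio

# Hypothetical stand-ins for worker agents defined in config.json;
# real agents would query the local Ollama server instead.
async def country_agent(prompt: str) -> str:
    return f"country({prompt})"

async def flag_agent(prompt: str) -> str:
    return f"flag({prompt})"

async def run_parallel(prompt: str) -> str:
    # Orchestrator fans out to all workers concurrently ...
    results = await asyncio.gather(country_agent(prompt), flag_agent(prompt))
    # ... and the aggregator combines their responses into one result.
    return " | ".join(results)

async def run_sequential(prompt: str) -> str:
    # Each agent's output becomes the next agent's input.
    out = prompt
    for agent in (country_agent, flag_agent):
        out = await agent(out)
    return out

print(asyncio.run(run_parallel("France")))    # country(France) | flag(France)
print(asyncio.run(run_sequential("France")))  # flag(country(France))
```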
## Prerequisites
- Python 3.8 or higher
- pip (for installing Python packages)
- Ollama CLI (for local LLMs)
## Installation
- Clone the repository:

  ```shell
  git clone https://github.com/Antarpreet/agentic-workflow-mcp.git
  ```

- Start an LLM server using the Ollama CLI. For example, to start the `deepseek-r1:14b` model, run:

  ```shell
  ollama run deepseek-r1:14b
  ```

  This starts the LLM server on `http://localhost:11434` by default.

- Install the required Python packages:

  ```shell
  pip install -r requirements.txt
  ```
- Add the MCP server to VS Code:
  - Open the Command Palette (Ctrl+Shift+P, or Cmd+Shift+P on macOS).
  - Type MCP and select **MCP: Add MCP Server**.
  - Select **Command (stdio)** as the server type.
  - Enter the command to start the MCP server:

    ```shell
    python -m uv run mcp run agentic-workflow-mcp/server.py
    ```

  - Name the server `Agentic Workflow`.
  - Select **User settings** to add the server to all workspaces, or **Workspace settings** to add it to the current workspace only. If you already have other workspaces open, restart VS Code for the changes to take effect.
  - This opens the `settings.json` file with the new MCP server configuration, which should look like this:

    ```json
    "mcp": {
      "servers": {
        "Agentic Workflow": {
          "type": "stdio",
          "command": "python",
          "args": [
            "-m", "uv", "run", "mcp", "run",
            "agentic-workflow-mcp/server.py"
          ]
        }
      }
    }
    ```
- Add config for the MCP server:
  - Adjust the default config in `config.json` as needed. The config settings are detailed below.
  - Copy the server folder to the user folder (`C:\Users\<username>\agentic-workflow-mcp` on Windows, or `~/agentic-workflow-mcp` on Linux/macOS).

    Windows:

    ```shell
    xcopy /E /I agentic-workflow-mcp %homedrive%%homepath%\agentic-workflow-mcp
    ```

    Mac/Linux:

    ```shell
    rm -rf ~/agentic-workflow-mcp
    cp -r agentic-workflow-mcp ~/agentic-workflow-mcp
    ```

  - Anytime you change these files, copy them to the user folder again and restart the MCP server from the `settings.json` file for the changes to take effect.
- Start the MCP server:
  - Click the start button above the MCP server configuration in the `settings.json` file.
  - This starts the MCP server; you can see the logs in the Output panel under `MCP: Agentic Workflow` by clicking either the `Running` or `Error` button above the MCP server configuration.
- Start using the MCP server:
  - Open GitHub Copilot in VS Code and switch to `Agent` mode.
  - You should see the `Agentic Workflow` MCP server and the `start_workflow` tool in the Copilot tools panel.
  - You can now use the `start_workflow` tool to start the Agentic Workflow. Prompt example:

    ```
    Use MCP Tools to start a workflow to YOUR_PROMPT_HERE.
    ```
## Config Settings

The config settings are defined as follows:

| Key | Type | Description | Required | Default |
|---|---|---|---|---|
| `default_model` | string | The default model to use for the LLM server. | true | `deepseek-r1:14b` |
| `url` | string | The URL of the Ollama LLM server. | true | `http://localhost:11434` |
| `verbose` | boolean | Whether to enable verbose logging. | false | `false` |
| `parallel` | boolean | Whether to run the agents in parallel or sequentially. | false | `true` |
| `base_path` | string | The absolute base path for your repo. Required if you are using the `file_extractor` agent. | false | `""` |
| `excluded_agents` | string[] | The agents to exclude from the workflow. | false | `[]` |
| `default_output_format` | object | The default output format for the LLM server. | false | `{"type": "object", "properties": {"response": {"type": "string"}}, "required": ["response"]}` |
| `agents` | object[] | The agents used in the workflow. | false | See Agents below |
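Putting these keys together, a minimal `config.json` might look like the following (illustrative values, not the repository's shipped defaults):

```json
{
  "default_model": "deepseek-r1:14b",
  "url": "http://localhost:11434",
  "verbose": false,
  "parallel": true,
  "base_path": "",
  "excluded_agents": [],
  "agents": []
}
```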
### Agents

Agents are defined as follows:

| Key | Type | Description | Required | Default |
|---|---|---|---|---|
| `name` | string | The name of the agent. | true | `Orchestrator Agent` |
| `description` | string | The description of the agent. | true | This agent is responsible for orchestrating the workflow and calling the other agents as needed, depending on the user prompt. |
| `model` | string | The model to use for the agent. | false | `deepseek-r1:14b` |
| `pass_to_next` | boolean | Whether to pass the output of this agent to the next agent. | false | `false` |
| `agents_to_use` | string[] | The agents to call before processing the current agent's prompt. | false | `[]` |
| `prompt` | string | The prompt to use for the agent. This takes precedence over `prompt_file`. | true | You are an agent that is responsible for orchestrating the workflow and calling the other agents as needed, depending on the user prompt. You will call the other agents in parallel and wait for their responses before proceeding to the next step. You will also call the Aggregator Agent in the end to aggregate the results from the other agents and return the final result to GitHub Copilot. |
| `prompt_file` | string | Either the absolute path to the prompt file, or the path in the format `agentic-workflow-mcp/YOUR_PROMPT_FILE_NAME` if the prompt file is added to the `agentic-workflow-mcp` folder in this repo. | false | `""` |
| `output_format` | object | The output format for the agent. | false | `{"type": "object", "properties": {"response": {"type": "string"}}, "required": ["response"]}` |
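The `output_format` values above are JSON Schemas. As a rough illustration of what a response matching the default schema looks like (this is a sketch, not the server's actual validation code; a real validator such as `jsonschema` handles the general case):

```python
import json

# The default output_format from config.json
SCHEMA = {
    "type": "object",
    "properties": {"response": {"type": "string"}},
    "required": ["response"],
}

def matches_default_schema(raw: str) -> bool:
    """Minimal check for the default schema: a JSON object whose
    required "response" field is a string."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(data, dict):
        return False
    return isinstance(data.get("response"), str)

print(matches_default_schema('{"response": "Paris"}'))  # True
print(matches_default_schema('{"answer": 42}'))         # False
```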
## Custom Embeddings

## Local LLM Tools

## GitHub Copilot Tools
### start_workflow

This tool starts the Agentic Workflow. It takes a prompt as input and returns the result of the workflow.
### Example usage (Parallel Agentic Workflow)

The example configuration contains agents that retrieve information about a country. There are three agents besides the `Orchestrator Agent` and `Aggregator Agent`: `Country`, `Flag`, and `Language`. Depending on your prompt, each of them returns its response, and the combined responses are sent to the `Aggregator Agent`, which returns the final result.

There is also a `prompt.txt` file, which contains the prompt for the `Orchestrator Agent` as an example of how to use a file instead of a string for the prompt. The `prompt.txt` file is used in the `prompt_file` key of the `Orchestrator Agent` in `config.json`.

Test prompt:

```
Use MCP tools to start workflow to confirm whether France is real or not
```
## Troubleshooting

- If the MCP server is not making any requests to the LLM server, try the following:
  - Restart VS Code as a sanity check.
  - Sometimes the Ollama server doesn't respond if it hasn't been used for a while. In that case, open the Ollama URL (`http://localhost:11434`) in a browser. You should see the message `Ollama is running`. If you don't, restart the Ollama server using the command `ollama run deepseek-r1:14b`.
  - Copy the server files again using the commands above, as another sanity check.
  - If your request to the local LLM is failing or taking too long, consider increasing the read timeout in the `server.py` file.
  - Restart the MCP server from the `settings.json` file.
  - Create a new chat in GitHub Copilot.
- To see the internal logs, set the `verbose` key in the `config.json` file to `true`. This includes all the logs in the response; you can see them by expanding the `start_workflow` tool in the Copilot chat window.
- You can also check the logs directly.

  Windows:

  ```shell
  type %homedrive%%homepath%\agentic-workflow-mcp\logs.txt
  ```

  Mac/Linux:

  ```shell
  tail -f ~/agentic-workflow-mcp/logs.txt
  ```
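The "open the Ollama URL in a browser" check can also be scripted. A small sketch using only the standard library (the URL and timeout are the defaults assumed above; this is a convenience helper, not part of the server):

```python
import urllib.request
import urllib.error

def ollama_is_running(url: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url`.

    Ollama's root endpoint replies with the plain text
    "Ollama is running" when the server is up.
    """
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused or timed out: server is not reachable.
        return False

print(ollama_is_running())
```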
## TODO
- Add support for multi-modality using local LLM servers.