> This README is an agentic workflow

# AI Tools for Developers

Agentic AI workflows enabled by Docker containers.

Just Docker. Just Markdown. BYO LLM.
## MCP

Any prompts you write, along with their tools, can now be used as MCP servers. Use serve mode with the `--mcp` flag, then register prompts via git ref or path with `--register <ref>`:

```sh
# ...
serve \
  --mcp \
  --register github:docker/labs-ai-tools-for-devs?path=prompts/examples/generate_dockerfile.md \
  --register /Users/ai-overlordz/some/local/prompt.md
# ...
```
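Serve mode runs in the same container as the `run` command shown later in this README. A sketch of a full invocation (the surrounding `docker run` flags are assumptions carried over from that later example; only `serve`, `--mcp`, and `--register` are documented here):

```sh
# Serve registered prompts as MCP servers from the prompts container.
docker run \
  --rm \
  -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --mount type=volume,source=docker-prompts,target=/prompts \
  vonwig/prompts:latest \
  serve \
  --mcp \
  --register github:docker/labs-ai-tools-for-devs?path=prompts/examples/generate_dockerfile.md
```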
This repo is the source for many experiments in our LinkedIn newsletter.
## What is this?

This is a simple Docker image that opens up a wide range of novel workflows by combining Dockerized tools, Markdown, and the LLM of your choice.
## Markdown is the language

Humans already speak it. So do LLMs. This software allows you to write complex workflows in markdown files, and then run them with your own LLM in your editor, terminal, or any other environment, thanks to Docker.
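A minimal prompt file, following the `prompt system` / `prompt user` convention used at the end of this README (the exact heading level and the task itself are illustrative assumptions):

```markdown
# prompt system

You are an expert at writing Dockerfiles.

# prompt user

Write a Dockerfile for the project in my current directory.
```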
## Dockerized Tools

OpenAI API-compatible LLMs already support tool calling. We believe these tools could just be Docker images. Based on our research, some of the benefits of using Docker are enabling the LLM to:

- take more complex actions
- get more context with fewer tokens
- work across a wider range of environments
- operate in a sandboxed environment
## Conversation Loop

The conversation loop is the core of each workflow. Tool results, agent responses, and, of course, the markdown prompts are all passed through the loop. If an agent sees an error, it will retry the tool with different parameters, or even try different tools, until it gets the right result.
## Multi-Model Agents

Each prompt can be configured to run with different LLM models, or even different model families, so you can use the best tool for the job. When you combine these tools, you can create multi-agent workflows in which each agent runs with the model best suited to its task.

With Docker, it is possible to have frontier models plan while lightweight local models execute.
## Project-First Design

To get help from an assistant in your software development loop, the only context necessary is the project you are working on.

### Extracting project context

An extractor is a Docker image that runs against a project and extracts information into a JSON context.
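As a sketch (the extractor image name here is hypothetical; this repo's actual extractors may differ), running an extractor could look like:

```sh
# Mount the project read-only, run the (hypothetical) extractor image,
# and capture the JSON context it writes to stdout.
docker run --rm \
  -v "$PWD":/project:ro \
  my-org/project-extractor:latest \
  /project > context.json
```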
### Prompts as a trackable artifact

Prompts are stored in a git repo and can be versioned, tracked, and shared for anyone to run in their own environment.
## Get Started

We highly recommend using the VSCode extension to get started. It will help you create prompts and run them with your own LLM.
### Running your first loop

#### VSCode

Install the extension: get the latest release and install it with

```sh
code --install-extension 'labs-ai-tools-vscode-<version>.vsix'
```

Then:

1. Open an existing markdown file, or create a new markdown file, in VSCode. You can even run this markdown file directly!
2. Run the command `>Docker AI: Set OpenAI API Key` to set an OpenAI API key, or use a dummy value for local models.
3. Run the command `>Docker AI: Select target project` to select a project to run the prompt against.
4. Run the command `>Docker AI: Run Prompt` to start the conversation loop.
#### CLI

These instructions assume you have a terminal open and Docker Desktop running.

1. Set your OpenAI key:

```sh
echo $OPENAI_API_KEY > $HOME/.openai-api-key
```

Note: we assume this file exists, so you must set a dummy value for local models.

2. Run the container in your project directory:

```sh
docker run \
  --rm \
  --pull=always \
  -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --mount type=volume,source=docker-prompts,target=/prompts \
  --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
  vonwig/prompts:latest \
  run \
  --host-dir $PWD \
  --user $USER \
  --platform "$(uname -o)" \
  --prompts "github:docker/labs-githooks?ref=main&path=prompts/git_hooks"
```
See the docs for more details on how to run the conversation loop.
## Building

```sh
#docker:command=build
docker build -t vonwig/prompts:local -f Dockerfile .
```
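Once built, the local tag can stand in for `vonwig/prompts:latest` in the run command from the CLI section, with `--pull=always` dropped so Docker uses your local image rather than fetching the published one (a sketch; the flags mirror that example):

```sh
docker run \
  --rm \
  -it \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --mount type=volume,source=docker-prompts,target=/prompts \
  --mount type=bind,source=$HOME/.openai-api-key,target=/root/.openai-api-key \
  vonwig/prompts:local \
  run \
  --host-dir $PWD \
  --user $USER \
  --platform "$(uname -o)" \
  --prompts "github:docker/labs-githooks?ref=main&path=prompts/git_hooks"
```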
Now, for the agentic workflow...

# prompt system

You are an expert at reading readmes.

Use curl to get the readme for https://github.com/docker/labs-ai-tools-for-devs before answering the following questions.

# prompt user

What is this project?