
mcp_server_webcrawl
MCP server for connecting web crawler data and archives to AI language models
mcp-server-webcrawl
Bridge the gap between your web crawl and AI language models using Model Context Protocol (MCP). With mcp-server-webcrawl, your AI client filters and analyzes web content under your direction or autonomously. The server includes a full-text search interface with boolean support, resource filtering by type, HTTP status, and more.
mcp-server-webcrawl gives the LLM a complete menu with which to search your web content, and works with a variety of web crawlers, including wget, WARC archives, InterroBot, Katana, and SiteOne.
mcp-server-webcrawl is free and open source, and requires Claude Desktop and Python (>=3.10). It is installed on the command line via pip:
pip install mcp_server_webcrawl
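After installation, the `mcp-server-webcrawl` console script (the same command name used in the MCP configuration below) should be available on your PATH. A quick sanity check, noting that the exact usage text printed may vary by release:

```shell
# confirm the console script resolves and prints its usage/options
mcp-server-webcrawl --help
```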
Features
- Claude Desktop ready
- Fulltext search support
- Filter by type, status, and more
- Multi-crawler compatible
- Quick MCP configuration
- ChatGPT support coming soon
MCP Configuration
From the Claude Desktop menu, navigate to File > Settings > Developer. Click Edit Config to locate the configuration file, open it in the editor of your choice, and modify the example to reflect your datasrc path.
You can set up more mcp-server-webcrawl connections under mcpServers as needed.
{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": [varies by crawler, see below]
    }
  }
}
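As a concrete illustration, a complete configuration using the wget crawler might look like the following; the archive path is a placeholder to replace with your own:

```json
{
  "mcpServers": {
    "webcrawl": {
      "command": "mcp-server-webcrawl",
      "args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
    }
  }
}
```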
wget (using --mirror)
The datasrc argument should be set to the parent directory of the mirrors.
"args": ["--crawler", "wget", "--datasrc", "/path/to/wget/archives/"]
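A typical wget invocation that produces a mirror under that parent directory might look like this sketch, with example.com standing in for the site you want to crawl:

```shell
# mirror a site into the datasrc parent directory;
# --convert-links and --adjust-extension make the mirror browsable offline
wget --mirror --convert-links --adjust-extension \
    --directory-prefix=/path/to/wget/archives/ https://example.com/
```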
WARC
The datasrc argument should be set to the parent directory of the WARC files.
"args": ["--crawler", "warc", "--datasrc", "/path/to/warc/archives/"]
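wget can also write its crawl to the WARC format directly. A minimal sketch, where the `--warc-file` value is a filename prefix (wget appends the `.warc.gz` extension) and example.com is a placeholder:

```shell
# capture a crawl as a WARC archive inside the datasrc parent directory
wget --mirror --warc-file=/path/to/warc/archives/example https://example.com/
```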
InterroBot
The datasrc argument should be set to the direct path to the database.
"args": ["--crawler", "interrobot", "--datasrc", "/path/to/Documents/InterroBot/interrobot.v2.db"]
Katana
The datasrc argument should be set to the parent directory of the text cache files.
"args": ["--crawler", "katana", "--datasrc", "/path/to/katana/archives/"]
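A sketch of a katana crawl that stores responses as text files under that directory, assuming katana's store-response options (flag names can change between releases, so check `katana -h` for your version):

```shell
# crawl a site and persist responses to the datasrc parent directory
katana -u https://example.com -store-response -store-response-dir /path/to/katana/archives/
```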
SiteOne (using archiving)
The datasrc argument should be set to the parent directory of the archives; archiving must be enabled.
"args": ["--crawler", "siteone", "--datasrc", "/path/to/SiteOne/archives/"]