
pinecone-vector-db-mcp-server
MCP Pinecone Vector Database Server
This project implements a Model Context Protocol (MCP) server that allows reading and writing vectorized information to a Pinecone vector database. It's designed to work with both RAG-processed PDF data and Confluence data.
Features
- Search for similar documents using text queries
- Add new vectors to the database with custom metadata
- Process and upload Confluence data in batch
- Delete vectors by ID
- Basic database statistics (temporarily disabled)
Prerequisites
- Bun runtime
- Pinecone API key
- OpenAI API key (for generating embeddings)
Installation
- Clone this repository
- Install dependencies:
bun install
- Create a .env file with the following content:
PINECONE_API_KEY=your-pinecone-api-key
OPENAI_API_KEY=your-openai-api-key
PINECONE_HOST=your-pinecone-host
PINECONE_INDEX_NAME=your-index-name
DEFAULT_NAMESPACE=your-namespace
Usage
Running the MCP Server
Start the server:
bun src/index.ts
The server will start and listen for MCP commands via stdio.
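As an illustration of what those stdio commands look like, the sketch below builds a `tools/call` request for the `search-vectors` tool described under Available Tools. MCP frames messages as newline-delimited JSON-RPC 2.0; the exact argument values here are made up for the example.

```typescript
// Hypothetical sketch: an MCP tools/call request as sent over stdio.
// MCP uses JSON-RPC 2.0 framing; the tool name and argument shape
// match the search-vectors tool listed in this README.
interface ToolCallRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: {
    name: string;
    arguments: Record<string, unknown>;
  };
}

const searchRequest: ToolCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "search-vectors",
    arguments: { query: "deployment checklist", topK: 5 },
  },
};

// Each message is serialized to a single line and written to the
// server process's stdin.
const wire = JSON.stringify(searchRequest);
```

In practice a client library such as the MCP SDK handles this framing for you; the example only shows the shape of what crosses the pipe.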
Running the Example Client
Test the server with the example client:
bun examples/client.ts
Processing Confluence Data
The Confluence processing script provides detailed logging and verification:
bun src/scripts/process-confluence.ts <file-path> [collection] [scope]
Parameters:
- file-path: Path to your Confluence JSON file (required)
- collection: Document collection name (defaults to "documentation")
- scope: Document scope (defaults to "documentation")
Example:
bun src/scripts/process-confluence.ts ./data/confluence-export.json "tech-docs" "engineering"
The script will:
- Validate input parameters
- Process and vectorize the content
- Upload vectors in batches
- Verify successful upload
- Provide detailed logs of the process
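The batch-upload step above can be sketched as a simple chunking loop. This is an illustrative sketch, not the script's actual code: the batch size of 100 and the `upsertBatch` callback name are assumptions.

```typescript
// Minimal sketch of batched uploading: split vectors into fixed-size
// chunks and upsert each chunk sequentially, logging progress.
// Batch size and callback name are illustrative assumptions.
function chunk<T>(items: T[], size: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    batches.push(items.slice(i, i + size));
  }
  return batches;
}

async function uploadInBatches<T>(
  vectors: T[],
  upsertBatch: (batch: T[]) => Promise<void>,
  batchSize = 100,
): Promise<number> {
  let uploaded = 0;
  for (const batch of chunk(vectors, batchSize)) {
    await upsertBatch(batch); // e.g. a Pinecone index upsert call
    uploaded += batch.length;
    console.log(`Uploaded ${uploaded}/${vectors.length} vectors`);
  }
  return uploaded;
}
```

Uploading in batches keeps each request under Pinecone's payload limits and lets the script log verifiable progress after every chunk.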
Available Tools
The server provides the following tools:
- search-vectors - Search for similar documents with parameters:
  - query: string (search query text)
  - topK: number (1-100, default: 5)
  - filter: object (optional filter criteria)
- add-vector - Add a single document with parameters:
  - text: string (content to vectorize)
  - metadata: object (vector metadata)
  - id: string (optional custom ID)
- process-confluence - Process Confluence JSON data with parameters:
  - filePath: string (path to JSON file)
  - namespace: string (optional, defaults to "capella-document-search")
- delete-vectors - Delete vectors with parameters:
  - ids: string[] (list of vector IDs)
  - namespace: string (optional, defaults to "capella-document-search")
- get-stats - Get database statistics (temporarily disabled)
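For clients written in TypeScript, the parameter lists above can be captured as types. The interfaces below are a client-side sketch inferred from this README, not types exported by the server.

```typescript
// Illustrative client-side types for the tool parameters listed above.
interface SearchVectorsParams {
  query: string;                      // search query text
  topK?: number;                      // 1-100, default 5
  filter?: Record<string, unknown>;   // optional filter criteria
}

interface AddVectorParams {
  text: string;                       // content to vectorize
  metadata: Record<string, unknown>;  // vector metadata
  id?: string;                        // optional custom ID
}

interface DeleteVectorsParams {
  ids: string[];                      // vector IDs to delete
  namespace?: string;                 // defaults to "capella-document-search"
}

// Example: a well-formed search-vectors argument object.
const searchArgs: SearchVectorsParams = {
  query: "onboarding guide",
  topK: 10,
  filter: { source: "confluence" },
};
```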
Database Configuration
The server requires a Pinecone vector database. Configure the connection details in your .env file:
PINECONE_API_KEY=your-api-key
PINECONE_HOST=your-host
PINECONE_INDEX_NAME=your-index
DEFAULT_NAMESPACE=your-namespace
Metadata Schema
Confluence Documents
ID: confluence-[page-id]-[item-id]
title: [title]
pageId: [page-id]
spaceKey: [space-key]
type: [type]
content: [text-content]
author: [author-name]
source: "confluence"
collection: "documentation"
scope: "documentation"
...
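The schema above maps naturally onto a TypeScript interface. The sketch below follows the field names listed here; the ID helper is hypothetical but implements the stated confluence-[page-id]-[item-id] pattern.

```typescript
// Sketch of the Confluence document metadata schema from this README.
interface ConfluenceVectorMetadata {
  title: string;
  pageId: string;
  spaceKey: string;
  type: string;
  content: string;      // the text content that was vectorized
  author: string;
  source: "confluence";
  collection: string;   // e.g. "documentation"
  scope: string;        // e.g. "documentation"
}

// Hypothetical helper building the documented ID pattern
// confluence-[page-id]-[item-id].
function confluenceVectorId(pageId: string, itemId: string): string {
  return `confluence-${pageId}-${itemId}`;
}
```

Keeping fields like `pageId` and `spaceKey` in metadata lets search queries filter results back to a specific Confluence space or page.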
Contributing
- Fork the repository
- Create your feature branch:
git checkout -b feature/my-new-feature
- Commit your changes:
git commit -am 'Add some feature'
- Push to the branch:
git push origin feature/my-new-feature
- Submit a pull request
License
MIT