
MCP Pinecone Vector Database Server

This project implements a Model Context Protocol (MCP) server for reading and writing vector data in a Pinecone vector database. It is designed to work with both RAG-processed PDF data and Confluence data.

Features

  • Search for similar documents using text queries
  • Add new vectors to the database with custom metadata
  • Process and upload Confluence data in batch
  • Delete vectors by ID
  • Basic database statistics (temporarily disabled)

Prerequisites

  • Bun runtime
  • Pinecone API key
  • OpenAI API key (for generating embeddings)

Installation

  1. Clone this repository

  2. Install dependencies:

    bun install
    
  3. Create a .env file with the following content:

    PINECONE_API_KEY=your-pinecone-api-key
    OPENAI_API_KEY=your-openai-api-key
    PINECONE_HOST=your-pinecone-host
    PINECONE_INDEX_NAME=your-index-name
    DEFAULT_NAMESPACE=your-namespace
    
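Since the server cannot function without these variables, it is worth checking them at startup. The sketch below uses the variable names from the `.env` file above; the `missingEnvVars` helper is illustrative and not part of the project:

```typescript
// Names match the .env file above; missingEnvVars is an illustrative helper.
const REQUIRED_VARS: string[] = [
  "PINECONE_API_KEY",
  "OPENAI_API_KEY",
  "PINECONE_HOST",
  "PINECONE_INDEX_NAME",
];

// Return the names of required variables that are absent or blank.
function missingEnvVars(env: Record<string, string | undefined>): string[] {
  return REQUIRED_VARS.filter((name) => !(env[name] ?? "").trim());
}

// Example: only the API keys are set, so the host and index name are reported.
console.log(missingEnvVars({
  PINECONE_API_KEY: "pk-...",
  OPENAI_API_KEY: "sk-...",
}));
```

Failing fast on a missing variable gives a clearer error than a failed Pinecone call later on.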

Usage

Running the MCP Server

Start the server:

bun src/index.ts

The server will start and listen for MCP commands via stdio.
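MCP messages are JSON-RPC 2.0 objects exchanged over stdin/stdout. As a rough sketch of what a client sends to invoke one of this server's tools (field values are illustrative, and the exact framing is handled by the MCP SDK's stdio transport):

```typescript
// Build a JSON-RPC 2.0 request for an MCP tool call.
// "tools/call" is the MCP method for invoking a named tool.
function buildToolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0",
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// A stdio transport writes one JSON message per line:
const line = JSON.stringify(
  buildToolCall(1, "search-vectors", { query: "deployment guide", topK: 5 }),
);
console.log(line);
```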

Running the Example Client

Test the server with the example client:

bun examples/client.ts

Processing Confluence Data

The Confluence processing script provides detailed logging and verification:

bun src/scripts/process-confluence.ts <file-path> [collection] [scope]

Parameters:

  • file-path: Path to your Confluence JSON file (required)
  • collection: Document collection name (defaults to "documentation")
  • scope: Document scope (defaults to "documentation")

Example:

bun src/scripts/process-confluence.ts ./data/confluence-export.json "tech-docs" "engineering"

The script will:

  1. Validate input parameters
  2. Process and vectorize the content
  3. Upload vectors in batches
  4. Verify successful upload
  5. Provide detailed logs of the process
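Step 3's batching can be sketched as follows. The batch size of 100 is an assumption for illustration, not a value taken from the script:

```typescript
// Split items into fixed-size batches before upserting; keeping upsert
// batches modest (e.g. ~100 vectors) avoids oversized requests.
function toBatches<T>(items: T[], batchSize: number): T[][] {
  const batches: T[][] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

// e.g. 250 vectors in batches of 100 → batch sizes 100, 100, 50
```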

Available Tools

The server provides the following tools:

  1. search-vectors - Search for similar documents with parameters:

    • query: string (search query text)
    • topK: number (1-100, default: 5)
    • filter: object (optional filter criteria)
  2. add-vector - Add a single document with parameters:

    • text: string (content to vectorize)
    • metadata: object (vector metadata)
    • id: string (optional custom ID)
  3. process-confluence - Process Confluence JSON data with parameters:

    • filePath: string (path to JSON file)
    • namespace: string (optional, defaults to "capella-document-search")
  4. delete-vectors - Delete vectors with parameters:

    • ids: string[] (list of vector IDs)
    • namespace: string (optional, defaults to "capella-document-search")
  5. get-stats - Get database statistics (temporarily disabled)
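The tool parameters above can be summarized as TypeScript shapes. This is a sketch for reference; the server's actual schema definitions may differ, and `clampTopK` is an illustrative helper:

```typescript
// Parameter shapes for the tools listed above (illustrative).
interface SearchVectorsParams {
  query: string;                     // search query text
  topK?: number;                     // 1-100, default 5
  filter?: Record<string, unknown>;  // optional filter criteria
}

interface AddVectorParams {
  text: string;                      // content to vectorize
  metadata: Record<string, unknown>; // vector metadata
  id?: string;                       // optional custom ID
}

interface DeleteVectorsParams {
  ids: string[];                     // vector IDs to delete
  namespace?: string;                // defaults to "capella-document-search"
}

// Keep topK inside the documented 1-100 range, defaulting to 5.
function clampTopK(topK: number = 5): number {
  return Math.min(100, Math.max(1, Math.floor(topK)));
}

const search: SearchVectorsParams = { query: "release notes", topK: clampTopK() };
console.log(search);
```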

Database Configuration

The server requires a Pinecone vector database. Configure the connection details in your .env file:

PINECONE_API_KEY=your-api-key
PINECONE_HOST=your-host
PINECONE_INDEX_NAME=your-index
DEFAULT_NAMESPACE=your-namespace

Metadata Schema

Confluence Documents

ID: confluence-[page-id]-[item-id]
title: [title]
pageId: [page-id]
spaceKey: [space-key]
type: [type]
content: [text-content]
author: [author-name]
source: "confluence"
collection: "documentation"
scope: "documentation"
...
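The schema above can be expressed in TypeScript as a metadata interface plus an ID builder for the `confluence-[page-id]-[item-id]` pattern. This is a sketch; fields elided with "..." in the original are omitted here as well:

```typescript
// Metadata shape for Confluence documents, mirroring the schema above.
interface ConfluenceMetadata {
  title: string;
  pageId: string;
  spaceKey: string;
  type: string;
  content: string;
  author: string;
  source: "confluence";
  collection: string; // e.g. "documentation"
  scope: string;      // e.g. "documentation"
}

// Vector IDs follow the confluence-[page-id]-[item-id] pattern.
function confluenceVectorId(pageId: string, itemId: string): string {
  return `confluence-${pageId}-${itemId}`;
}

console.log(confluenceVectorId("12345", "0")); // → confluence-12345-0
```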

Contributing

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/my-new-feature
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin feature/my-new-feature
  5. Submit a pull request

License

MIT
