MCP Pinecone Vector Database Server

This project implements a Model Context Protocol (MCP) server that allows reading and writing vectorized information to a Pinecone vector database. It's designed to work with both RAG-processed PDF data and Confluence data.

Features

  • Search for similar documents using text queries
  • Add new vectors to the database with custom metadata
  • Process and upload Confluence data in batch
  • Delete vectors by ID
  • Basic database statistics (temporarily disabled)

Prerequisites

  • Bun runtime
  • Pinecone API key
  • OpenAI API key (for generating embeddings)

Installation

  1. Clone this repository

  2. Install dependencies:

    bun install
    
  3. Create a .env file with the following content:

    PINECONE_API_KEY=your-pinecone-api-key
    OPENAI_API_KEY=your-openai-api-key
    PINECONE_HOST=your-pinecone-host
    PINECONE_INDEX_NAME=your-index-name
    DEFAULT_NAMESPACE=your-namespace
    

Usage

Running the MCP Server

Start the server:

bun src/index.ts

The server will start and listen for MCP commands via stdio.

Running the Example Client

Test the server with the example client:

bun examples/client.ts
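
For reference, a bare-bones client built on the official @modelcontextprotocol/sdk package might look like the sketch below. This is an illustration only, not taken from the repository; the bundled examples/client.ts may be structured differently.

// Minimal sketch of an MCP client talking to this server over stdio.
// Assumes the @modelcontextprotocol/sdk package is installed.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "bun",
  args: ["src/index.ts"],
});

const client = new Client(
  { name: "example-client", version: "1.0.0" },
  { capabilities: {} }
);

await client.connect(transport);

// List the tools exposed by the server (search-vectors, add-vector, ...).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

await client.close();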

Processing Confluence Data

The Confluence processing script provides detailed logging and verification:

bun src/scripts/process-confluence.ts <file-path> [collection] [scope]

Parameters:

  • file-path: Path to your Confluence JSON file (required)
  • collection: Document collection name (defaults to "documentation")
  • scope: Document scope (defaults to "documentation")

Example:

bun src/scripts/process-confluence.ts ./data/confluence-export.json "tech-docs" "engineering"

The script will:

  1. Validate input parameters
  2. Process and vectorize the content
  3. Upload vectors in batches
  4. Verify successful upload
  5. Provide detailed logs of the process
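
The script's internals are not reproduced here, but the core embed-and-upsert loop presumably resembles the sketch below, which uses the official openai and @pinecone-database/pinecone clients. The helper name embedAndUpsertBatch and the embedding model are assumptions, not taken from the repository.

// Illustrative sketch of batched embedding and upserting (not the actual script).
import OpenAI from "openai";
import { Pinecone } from "@pinecone-database/pinecone";

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone
  .index(process.env.PINECONE_INDEX_NAME!)
  .namespace(process.env.DEFAULT_NAMESPACE ?? "capella-document-search");

type Chunk = { id: string; text: string; metadata: Record<string, string> };

// Hypothetical helper: embed one batch of Confluence chunks and upsert the vectors.
async function embedAndUpsertBatch(chunks: Chunk[]): Promise<void> {
  const response = await openai.embeddings.create({
    model: "text-embedding-3-small", // assumed; the script may use a different model
    input: chunks.map((c) => c.text),
  });
  await index.upsert(
    chunks.map((c, i) => ({
      id: c.id,
      values: response.data[i].embedding,
      metadata: c.metadata,
    }))
  );
}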

Available Tools

The server provides the following tools:

  1. search-vectors - Search for similar documents with parameters:
     • query: string (search query text)
     • topK: number (1-100, default: 5)
     • filter: object (optional filter criteria)

  2. add-vector - Add a single document with parameters:
     • text: string (content to vectorize)
     • metadata: object (vector metadata)
     • id: string (optional custom ID)

  3. process-confluence - Process Confluence JSON data with parameters:
     • filePath: string (path to JSON file)
     • namespace: string (optional, defaults to "capella-document-search")

  4. delete-vectors - Delete vectors with parameters:
     • ids: string[] (list of vector IDs)
     • namespace: string (optional, defaults to "capella-document-search")

  5. get-stats - Get database statistics (temporarily disabled)
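
As an illustration, a client that is already connected (as in the sketch under "Running the Example Client") could invoke search-vectors roughly as follows; the query and filter values are made up, and the exact response shape depends on the server implementation.

// Calling the search-vectors tool from a connected MCP client (sketch).
const result = await client.callTool({
  name: "search-vectors",
  arguments: {
    query: "How do I configure single sign-on?",
    topK: 5,
    filter: { collection: "documentation" },
  },
});
console.log(result.content);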

Database Configuration

The server requires a Pinecone vector database. Configure the connection details in your .env file:

PINECONE_API_KEY=your-api-key
PINECONE_HOST=your-host
PINECONE_INDEX_NAME=your-index
DEFAULT_NAMESPACE=your-namespace
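
As a rough illustration of how these variables might be consumed (the actual wiring in src/index.ts may differ, and using PINECONE_HOST to target the index by host URL is an assumption):

// Sketch only: mapping the .env values onto a Pinecone index handle.
import { Pinecone } from "@pinecone-database/pinecone";

for (const name of ["PINECONE_API_KEY", "PINECONE_INDEX_NAME"]) {
  if (!process.env[name]) throw new Error(`Missing environment variable: ${name}`);
}

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

// Passing the host URL as the second argument targets the index directly;
// whether the server actually uses PINECONE_HOST this way is assumed.
const index = pinecone
  .index(process.env.PINECONE_INDEX_NAME!, process.env.PINECONE_HOST)
  .namespace(process.env.DEFAULT_NAMESPACE ?? "capella-document-search");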

Metadata Schema

Confluence Documents

ID: confluence-[page-id]-[item-id]
title: [title]
pageId: [page-id]
spaceKey: [space-key]
type: [type]
content: [text-content]
author: [author-name]
source: "confluence"
collection: "documentation"
scope: "documentation"
...
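
Expressed as a TypeScript type, the documented fields look roughly like this; the trailing "..." in the schema above stands for additional fields that are not guessed at here.

// Metadata stored alongside each Confluence vector, as documented above.
// Vector IDs follow the pattern: confluence-[page-id]-[item-id]
interface ConfluenceVectorMetadata {
  title: string;
  pageId: string;
  spaceKey: string;
  type: string;
  content: string;        // the item's text content
  author: string;
  source: "confluence";
  collection: string;     // e.g. "documentation"
  scope: string;          // e.g. "documentation"
  [key: string]: string;  // further fields omitted in the README
}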

Contributing

  1. Fork the repository
  2. Create your feature branch: git checkout -b feature/my-new-feature
  3. Commit your changes: git commit -am 'Add some feature'
  4. Push to the branch: git push origin feature/my-new-feature
  5. Submit a pull request

License

MIT

Related Recommendations

  • Joshua Armstrong
  • Confidential guide on numerology and astrology, based on GG33 public information

  • https://suefel.com
  • Latest advice and best practices for custom GPT development.

  • Alexandru Strujac
  • Efficient thumbnail creator for YouTube videos

  • Emmet Halm
  • Converts Figma frames into front-end code for various mobile frameworks.

  • Elijah Ng Shi Yi
  • Advanced software engineer GPT that excels through nailing the basics.

  • lumpenspace
  • Take an adjectivised noun, and create images making it progressively more adjective!

  • https://maiplestudio.com
  • Find Exhibitors, Speakers and more

  • Yusuf Emre Yeşilyurt
  • I find academic articles and books for research and literature reviews.

  • Carlos Ferrin
  • Find movies and TV series on streaming platforms.

  • https://zenepic.net
  • Embark on a thrilling diplomatic quest across a galaxy on the brink of war. Navigate complex politics and alien cultures to forge peace and avert catastrophe in this immersive interstellar adventure.

  • apappascs
  • Discover the most comprehensive and up-to-date collection of MCP servers on the market. This repository serves as a centralized hub, offering an extensive catalog of open-source and proprietary MCP servers, complete with features, documentation links, and contributors.

  • ShrimpingIt
  • MicroPython I2C-based operation of MCP-series GPIO expanders, derived from Adafruit_MCP230xx.

  • jae-jae
  • MCP server that fetches web page content using the Playwright headless browser.

  • ravitemer
  • A powerful Neovim plugin for managing MCP (Model Context Protocol) servers.

  • patruff
  • A bridge between Ollama and MCP servers, enabling local LLMs to use Model Context Protocol tools.

  • pontusab
  • The Cursor and Windsurf community: find rules and MCPs.

  • av
  • Effortlessly run LLM backends, APIs, frontends, and services with a single command.

  • appcypher
  • Awesome MCP Servers - a curated list of Model Context Protocol servers.

  • Mintplex-Labs
  • All-in-one desktop and Docker AI application with built-in RAG, AI agents, a no-code agent builder, MCP compatibility, and more.

  • chongdashu
  • Uses the Model Context Protocol (MCP) to let AI assistant clients such as Cursor, Windsurf, and Claude Desktop control Unreal Engine through natural language.
