When you should use this server

  • Give AI assistants persistent memory across sessions and conversations
  • Store knowledge, facts, or documents in a vector database for later retrieval
  • Retrieve semantic matches to queries instead of relying on exact keyword lookups
  • Manage contextual memory for long-running workflows or applications

Key features

  • Semantic vector storage and retrieval
  • Context-aware memory persistence
  • Efficient similarity search
  • Metadata filtering and payload storage
  • Cross-session memory for AI assistants
  • Compatibility with both cloud and self-hosted deployments
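
The metadata filtering and payload storage features map onto Qdrant's payload filters: a similarity search can be restricted to points whose stored payload matches given conditions. The following is a minimal sketch using the Python qdrant-client directly; the collection name, payload field, and query text are hypothetical, and whether filters are exposed through the tool parameters listed later depends on the server configuration.

  from fastembed import TextEmbedding
  from qdrant_client import QdrantClient
  from qdrant_client.models import Filter, FieldCondition, MatchValue

  client = QdrantClient(url="http://localhost:6333")   # placeholder instance
  embedder = TextEmbedding()                            # default 384-dimension model

  # Similarity search restricted to points whose payload has metadata.project == "alpha".
  query_vector = list(embedder.embed(["notes about the alpha rollout"]))[0].tolist()
  hits = client.search(
      collection_name="ai-memory",                      # hypothetical collection name
      query_vector=query_vector,
      query_filter=Filter(
          must=[FieldCondition(key="metadata.project", match=MatchValue(value="alpha"))],
      ),
      limit=5,
  )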

Requirements

  • Hosting: Requires a running Qdrant instance, either Qdrant Cloud or self-hosted
  • Authentication: Uses standard Qdrant API-key authentication when it is enabled on the instance
  • Collections: A collection name must be supplied per request unless a default collection is configured
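
For orientation, the sketch below shows what satisfying these requirements looks like when talking to Qdrant directly with the Python qdrant-client. The URL, API key, collection name, and vector size are placeholders, and the server's own configuration mechanism may differ.

  from qdrant_client import QdrantClient
  from qdrant_client.models import VectorParams, Distance

  QDRANT_URL = "https://your-cluster.cloud.qdrant.io"   # or "http://localhost:6333" for self-hosted
  QDRANT_API_KEY = "your-api-key"                        # omit if authentication is disabled
  COLLECTION = "ai-memory"                               # hypothetical default collection

  client = QdrantClient(url=QDRANT_URL, api_key=QDRANT_API_KEY)

  # Create the collection on first use; 384 matches the default fastembed model assumed here.
  existing = {c.name for c in client.get_collections().collections}
  if COLLECTION not in existing:
      client.create_collection(
          collection_name=COLLECTION,
          vectors_config=VectorParams(size=384, distance=Distance.COSINE),
      )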

Tools provided

qdrant-store

Stores information in the Qdrant vector database with optional metadata. Parameters:
  • information (string, required) — content to store
  • metadata (JSON, optional) — associated metadata to store alongside the vector
  • collection_name (string, required if no default) — collection to store data in
Returns:
  • Confirmation message with vector ID and status
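
Conceptually, a store operation embeds the text and upserts a point whose payload carries the original content and any metadata. The sketch below imitates that flow directly against Qdrant with fastembed and qdrant-client; the embedding model, collection name, payload layout, and return message are illustrative assumptions, not the server's internals.

  import uuid

  from fastembed import TextEmbedding
  from qdrant_client import QdrantClient
  from qdrant_client.models import PointStruct

  client = QdrantClient(url="http://localhost:6333")     # placeholder instance
  embedder = TextEmbedding()                              # default 384-dimension model

  def store(information: str, metadata: dict | None = None,
            collection_name: str = "ai-memory") -> str:
      """Embed the text and upsert it with its metadata as the payload."""
      vector = list(embedder.embed([information]))[0].tolist()
      point_id = str(uuid.uuid4())
      client.upsert(
          collection_name=collection_name,
          points=[PointStruct(
              id=point_id,
              vector=vector,
              payload={"document": information, "metadata": metadata or {}},
          )],
      )
      return f"Stored as {point_id}"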

qdrant-find

Retrieves stored information from Qdrant that is semantically relevant to the query. Parameters:
  • query (string, required) — text to search for semantically similar content
  • collection_name (string, required if no default) — collection to search
Returns:
  • Matching stored information, ordered by semantic similarity
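
A find operation is the mirror image: embed the query with the same model and run a similarity search, which Qdrant returns ordered by score. Again, this is a sketch of the underlying mechanics under the same assumptions as the store example, not the tool's actual implementation.

  from fastembed import TextEmbedding
  from qdrant_client import QdrantClient

  client = QdrantClient(url="http://localhost:6333")     # placeholder instance
  embedder = TextEmbedding()                              # same default model used when storing

  def find(query: str, collection_name: str = "ai-memory", limit: int = 5) -> list[dict]:
      """Embed the query and return the closest stored payloads, best match first."""
      query_vector = list(embedder.embed([query]))[0].tolist()
      hits = client.search(
          collection_name=collection_name,
          query_vector=query_vector,
          limit=limit,
      )
      # Each hit carries the stored payload and a cosine-similarity score.
      return [{"score": hit.score, **(hit.payload or {})} for hit in hits]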

Notes

  • Best used as a memory backend for AI assistants needing semantic recall
  • Requires an active Qdrant instance; supports both Qdrant Cloud and self-hosted deployments