Semantic Memory Search
Overview
Enhance OpenClaw's markdown-based memory system with vector-powered semantic search. Search memories by meaning and intent rather than exact keywords: ask "what caching solution did we pick?" and get the relevant decisions even when the word "caching" never appears. A file watcher automatically re-syncs the index when memories change. Supports multiple embedding providers, including fully local options.
Benefits
- Semantic search finds content by meaning, not just keywords
- SHA-256 content hashing skips unchanged files, preventing redundant embedding API calls during reindexing
- File watcher enables automatic index updates
- Hybrid search combines semantic vectors with keyword matching
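The hash-based skip logic from the benefits above can be sketched as follows. This is a minimal illustration of the technique, not memsearch's actual implementation; the index shape (a mapping of file path to last-indexed hash) and the directory layout are assumptions.

```python
import hashlib
from pathlib import Path


def content_hash(path: Path) -> str:
    """SHA-256 over the file's bytes; stable across runs and machines."""
    return hashlib.sha256(path.read_bytes()).hexdigest()


def files_needing_reindex(memory_dir: Path, index: dict[str, str]) -> list[Path]:
    """Return markdown files whose content differs from the stored index.

    `index` maps file path -> last-indexed SHA-256 digest (hypothetical shape).
    Unchanged files hash to the same digest and are skipped, so no
    embedding API call is made for them.
    """
    changed = []
    for md in sorted(memory_dir.glob("**/*.md")):
        if index.get(str(md)) != content_hash(md):
            changed.append(md)
    return changed
```

On reindex, only the returned files are re-embedded; everything else keeps its existing vectors.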
Requirements
- Python 3.10 or higher with pip or uv package manager
- memsearch tool (pip install memsearch)
- Optional: API credentials for embedding providers
- OpenClaw markdown memory directory
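A typical setup for the requirements above; the package name is taken from the requirements list, and the uv invocation is the standard uv equivalent of pip install:

```shell
# Verify Python 3.10 or higher is available
python3 --version

# Install with pip...
pip install memsearch

# ...or with the uv package manager
uv pip install memsearch
```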
Technical Details
Uses the memsearch tool with vector embeddings for semantic search. Supports OpenAI, Google, Voyage, Ollama, and fully local embedding providers. Implements Reciprocal Rank Fusion (RRF) to combine semantic search over dense vectors with BM25 keyword matching. The markdown files remain the primary source of truth; the vector index is a derived cache that can be rebuilt at any time. SHA-256 content hashing identifies unchanged files so they are not re-embedded.
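Reciprocal Rank Fusion itself is a small algorithm: each document scores the sum of 1/(k + rank) across the ranked lists it appears in, and documents found by both the semantic and the keyword ranker rise to the top. A sketch of the general technique (not memsearch's internals; the file paths below are hypothetical):

```python
from collections import defaultdict


def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked lists of document IDs with RRF.

    Each document scores sum(1 / (k + rank)) over the lists containing it.
    k = 60 is the constant used in the original RRF paper.
    """
    scores: dict[str, float] = defaultdict(float)
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)


# Hypothetical results: a dense-vector ranking and a BM25 keyword ranking.
semantic = ["decisions/caching.md", "notes/redis.md", "notes/api.md"]
keyword = ["decisions/caching.md", "notes/todo.md", "notes/redis.md"]
fused = reciprocal_rank_fusion([semantic, keyword])
```

Here "decisions/caching.md" ranks first in both lists, so it leads the fused ranking; documents found by only one ranker still appear, just lower down.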
Ready to deploy this on your infrastructure?
Book a Call