A Model Context Protocol (MCP) server that provides semantic search and knowledge graph capabilities for Obsidian vaults using Smart Connections embeddings.
This MCP server allows Claude (and other MCP clients) to:
- Search semantically through your Obsidian notes using pre-computed embeddings
- Find similar notes based on content similarity
- Build connection graphs showing how notes are related
- Query by embedding vectors for advanced use cases
- Access note content with block-level granularity
Uses the embeddings generated by Obsidian's Smart Connections plugin to perform fast, accurate semantic searches across your entire vault. Supports both:
- Query-based search: Uses Ollama to generate embeddings for search queries, enabling true semantic search
- Keyword fallback: Token-based matching when Ollama is unavailable
Builds multi-level connection graphs showing how notes are related through semantic similarity, helping discover hidden relationships in your knowledge base.
Direct access to embedding-based similarity calculations using cosine similarity on the stored vectors (384-dimensional for the default TaylorAI/bge-micro-v2 model).
Retrieve full note content or specific sections/blocks with intelligent extraction based on Smart Connections block mappings.
- Node.js 18 or higher
- An Obsidian vault with Smart Connections plugin installed and embeddings generated
- Claude Desktop (or another MCP client)
- Optional: Ollama with an embedding model for semantic query search (e.g., nomic-embed-text-v2-moe)
1. Clone the repository:

   ```bash
   git clone https://github.com/msdanyg/smart-connections-mcp.git
   cd smart-connections-mcp
   ```

2. Install dependencies:

   ```bash
   npm install
   ```

3. Build the TypeScript project:

   ```bash
   npm run build
   ```

4. Configure Claude Desktop:
Edit your Claude Desktop configuration file:
   - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
   - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
   Add the following to the `mcpServers` section:

   ```json
   {
     "mcpServers": {
       "smart-connections": {
         "command": "node",
         "args": [
           "/ABSOLUTE/PATH/TO/smart-connections-mcp/dist/index.js"
         ],
         "env": {
           "SMART_VAULT_PATH": "/ABSOLUTE/PATH/TO/YOUR/OBSIDIAN/VAULT",
           "OLLAMA_URL": "http://localhost:11434",
           "OLLAMA_MODEL": "nomic-embed-text-v2-moe:latest",
           "CACHE_DIR": "/ABSOLUTE/PATH/TO/.smart-env/query-cache"
         }
       }
     }
   }
   ```

   Important: Replace the placeholder paths with your actual paths:

   - Update the `args` path to point to your built `index.js` file
   - Update `SMART_VAULT_PATH` to your Obsidian vault path
   Optional Ollama configuration (for semantic query search):

   - `OLLAMA_URL`: URL of your Ollama instance (default: `http://localhost:11434`)
   - `OLLAMA_MODEL`: Embedding model to use (must match your vault embeddings)
   - `CACHE_DIR`: Directory for the query embedding cache (improves performance)
5. Optional: Set up Ollama for Semantic Query Search

   To enable true semantic search for text queries (recommended):
   ```bash
   # Install Ollama (if not already installed)
   # Visit: https://ollama.ai

   # Pull the embedding model that matches your vault
   # For nomic embeddings (768 dimensions):
   ollama pull nomic-embed-text-v2-moe:latest

   # For default Smart Connections (384 dimensions):
   ollama pull TaylorAI/bge-micro-v2

   # Start Ollama (usually runs automatically)
   ollama serve
   ```
Without Ollama, the server will fall back to keyword-based search (less accurate).
6. Restart Claude Desktop

   The MCP server will start automatically when Claude Desktop launches.
Find notes semantically similar to a given note.
Parameters:
- `note_path` (string, required): Path to the note (e.g., "Note.md" or "Folder/Note.md")
- `threshold` (number, optional): Similarity threshold 0-1, default 0.5
- `limit` (number, optional): Maximum results, default 10
Example:

```json
{
  "note_path": "MyNote.md",
  "threshold": 0.7,
  "limit": 5
}
```

Returns:
```json
[
  {
    "path": "RelatedNote.md",
    "similarity": 0.85,
    "blocks": ["#Overview", "#Key Points", "#Details"]
  }
]
```

Build a multi-level connection graph showing how notes are semantically connected.
Parameters:
- `note_path` (string, required): Starting note path
- `depth` (number, optional): Graph depth (levels), default 2
- `threshold` (number, optional): Similarity threshold 0-1, default 0.6
- `max_per_level` (number, optional): Max connections per level, default 5
Example:

```json
{
  "note_path": "MyNote.md",
  "depth": 2,
  "threshold": 0.7
}
```

Returns:
```json
{
  "path": "MyNote.md",
  "depth": 0,
  "similarity": 1.0,
  "connections": [
    {
      "path": "RelatedNote.md",
      "depth": 1,
      "similarity": 0.82,
      "connections": [...]
    }
  ]
}
```

Search notes using a text query. Uses semantic search via Ollama if available, falling back to keyword matching otherwise.
Parameters:
- `query` (string, required): Search query text
- `limit` (number, optional): Maximum results, default 10
- `threshold` (number, optional): Similarity threshold 0-1, default 0.5
Example:

```json
{
  "query": "project management",
  "limit": 5
}
```

How it works:
- With Ollama: Generates query embedding → cosine similarity search (cached for performance)
- Without Ollama: Token-based keyword matching (less accurate but functional)
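Conceptually, the two paths above can be sketched like this (a simplified TypeScript illustration; the type names, the `EmbeddingClient` interface, and this `searchByQuery` signature are assumptions, not the server's actual API):

```typescript
// Sketch of the semantic-search-with-keyword-fallback flow. Illustrative only.
type Note = { text: string; embedding: number[] };
type Hit = { path: string; similarity: number };

interface EmbeddingClient {
  isHealthy(): Promise<boolean>;
  embed(text: string): Promise<number[]>;
}

// Fallback scoring: fraction of query tokens present in the note text.
function keywordScore(query: string, text: string): number {
  const tokens = query.toLowerCase().split(/\W+/).filter(Boolean);
  if (tokens.length === 0) return 0;
  const haystack = text.toLowerCase();
  return tokens.filter((t) => haystack.includes(t)).length / tokens.length;
}

async function searchByQuery(
  query: string,
  notes: Map<string, Note>,
  client: EmbeddingClient,
  cosine: (a: number[], b: number[]) => number,
  threshold = 0.5,
): Promise<Hit[]> {
  const healthy = await client.isHealthy();
  // Semantic path: embed the query once, then compare against stored vectors.
  const queryVec = healthy ? await client.embed(query) : null;

  const hits: Hit[] = [];
  for (const [path, note] of notes) {
    const similarity = queryVec
      ? cosine(queryVec, note.embedding)
      : keywordScore(query, note.text); // keyword fallback
    if (similarity >= threshold) hits.push({ path, similarity });
  }
  return hits.sort((a, b) => b.similarity - a.similarity);
}
```

Either way, results below the threshold are dropped and the rest are returned sorted by similarity.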
Returns:
```json
[
  {
    "path": "ProjectNote.md",
    "similarity": 0.78,
    "blocks": ["#Overview", "#Timeline"]
  }
]
```

Find nearest neighbors for a given embedding vector (advanced use).
Parameters:
- `embedding_vector` (number[], required): 384-dimensional vector
- `k` (number, optional): Number of neighbors, default 10
- `threshold` (number, optional): Similarity threshold 0-1, default 0.5
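Like the other tools, this takes a JSON payload; a call might look like the following (the vector is truncated for readability — a real request must supply all 384 values):

```json
{
  "embedding_vector": [0.012, -0.034, 0.051],
  "k": 5,
  "threshold": 0.6
}
```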
Retrieve full note content with optional block extraction.
Parameters:
- `note_path` (string, required): Path to the note
- `include_blocks` (string[], optional): Specific block headings to extract
Example:

```json
{
  "note_path": "MyNote.md",
  "include_blocks": ["#Introduction", "#Main Points"]
}
```

Returns:
```json
{
  "content": "# Full note content...",
  "blocks": {
    "#Introduction": "Content of this section...",
    "#Main Points": "Content of this section..."
  }
}
```

Get statistics about the knowledge base.
Parameters: None
Returns:
```json
{
  "totalNotes": 137,
  "totalBlocks": 1842,
  "embeddingDimension": 384,
  "modelKey": "TaylorAI/bge-micro-v2"
}
```

Once configured, you can ask Claude to use these tools naturally:
- "Find notes similar to my project planning document"
- "Show me a connection graph starting from my main research note"
- "Search my notes for information about [your topic]"
- "What's in my note about [topic]?"
- "Give me stats about my knowledge base"
```
┌─────────────────────────────────────────────────────────────┐
│                       Claude Desktop                        │
│                        (MCP Client)                         │
└─────────────────────────┬───────────────────────────────────┘
                          │
                          │ MCP Protocol (stdio)
                          │
┌─────────────────────────▼───────────────────────────────────┐
│                Smart Connections MCP Server                 │
│  ┌─────────────────────────────────────────────────────┐    │
│  │ index.ts (MCP Server + Tool Handlers)               │    │
│  └────────────────┬────────────────────────────────────┘    │
│                   │                                         │
│  ┌────────────────▼────────────────────────────────────┐    │
│  │ search-engine.ts (Semantic Search Logic)            │    │
│  │ - getSimilarNotes()                                 │    │
│  │ - getConnectionGraph()                              │    │
│  │ - searchByQuery() → searchByEmbedding()             │    │
│  │ - searchByKeyword() (fallback)                      │    │
│  └────────────┬───────────────────────┬────────────────┘    │
│               │                       │                     │
│  ┌────────────▼───────────────┐  ┌────▼────────────────┐    │
│  │ ollama-client.ts           │  │ embedding-utils.ts  │    │
│  │ - Generate query embeddings│  │ - cosineSimilarity  │    │
│  │ - Disk-based LRU cache     │  │ - findNeighbors     │    │
│  │ - Health check & fallback  │  └─────────────────────┘    │
│  └────────────┬───────────────┘                             │
│               │                                             │
│               │ HTTP                                        │
│               │                                             │
│  ┌────────────▼────────────────────────────────────────┐    │
│  │ smart-connections-loader.ts (Data Access)           │    │
│  │ - Load .smart-env/smart_env.json                    │    │
│  │ - Load .smart-env/multi/*.ajson embeddings          │    │
│  │ - Read note content from vault                      │    │
│  └────────────────┬────────────────────────────────────┘    │
└───────────────────┼─────────────────────────────────────────┘
                    │
    ┌───────────────┼───────────────┐
    │               │               │
┌───▼──────┐  ┌─────▼─────────┐  ┌──▼──────────────────┐
│  Ollama  │  │  File System  │  │  .smart-env/        │
│  Server  │  │  (vault *.md) │  │  query-cache/       │
│  :11434  │  │               │  │  embeddings.json    │
└──────────┘  └───────────────┘  └─────────────────────┘
```
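The multi-level graph building handled by `search-engine.ts` can be sketched as a bounded recursion (an illustrative TypeScript sketch, not the actual implementation; `buildGraph` and the `similarNotes` lookup callback are assumed names):

```typescript
// Illustrative sketch of connection-graph building: expand each note's
// strongest unvisited neighbors, level by level, up to a maximum depth.
type Neighbor = { path: string; similarity: number };
type GraphNode = { path: string; depth: number; similarity: number; connections: GraphNode[] };

function buildGraph(
  path: string,
  similarNotes: (p: string) => Neighbor[], // assumed similarity lookup
  maxDepth: number,
  threshold: number,
  maxPerLevel: number,
  depth = 0,
  similarity = 1.0,
  visited = new Set<string>(),
): GraphNode {
  visited.add(path);
  const node: GraphNode = { path, depth, similarity, connections: [] };
  if (depth >= maxDepth) return node;
  // Keep only strong, unvisited neighbors, capped per level.
  const next = similarNotes(path)
    .filter((n) => n.similarity >= threshold && !visited.has(n.path))
    .slice(0, maxPerLevel);
  for (const n of next) {
    node.connections.push(
      buildGraph(n.path, similarNotes, maxDepth, threshold, maxPerLevel, depth + 1, n.similarity, visited),
    );
  }
  return node;
}
```

Tracking visited paths prevents cycles, so a note never appears twice in one graph.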
The server supports multiple embedding models depending on your Smart Connections configuration:
| Model | Dimensions | Notes |
|---|---|---|
| TaylorAI/bge-micro-v2 | 384 | Default Smart Connections model |
| nomic-embed-text-v2-moe | 768 | Higher quality, recommended for Ollama |
| Custom models | Variable | Auto-detected from vault embeddings |
Important: Your `OLLAMA_MODEL` must match the embedding model used in your Obsidian vault.
The server reads from Obsidian's Smart Connections `.smart-env/` directory:

- `smart_env.json`: Configuration and model settings
- `multi/*.ajson`: Per-note embeddings and block mappings
- `query-cache/embeddings.json`: Cached query embeddings (auto-created)
| Operation | With Ollama Cache | Without Cache | Keyword Fallback |
|---|---|---|---|
| Load time | 2-5s | 2-5s | 2-5s |
| First query search | ~500-800ms | ~500-800ms | ~100-200ms |
| Cached query | <50ms | N/A | ~100-200ms |
| Memory usage | ~30-40MB | ~20-30MB | ~20-30MB |
Query Cache Benefits:
- LRU eviction (max 1000 entries)
- Disk-persisted across restarts
- Significantly faster repeated searches
- Automatic cleanup of old entries
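The eviction policy is a classic LRU; here is a minimal in-memory sketch (the real cache is also disk-persisted, and the class and method names here are illustrative, not the server's code):

```typescript
// Least-recently-used cache: reads refresh an entry's recency, and inserting
// past capacity evicts whichever entry was touched longest ago.
class LruCache<V> {
  private map = new Map<string, V>();
  constructor(private maxEntries = 1000) {}

  get(key: string): V | undefined {
    const value = this.map.get(key);
    if (value !== undefined) {
      // Re-insert to mark as most recently used.
      this.map.delete(key);
      this.map.set(key, value);
    }
    return value;
  }

  set(key: string, value: V): void {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxEntries) {
      // Map preserves insertion order, so the first key is least recent.
      const oldest = this.map.keys().next().value as string;
      this.map.delete(oldest);
    }
  }
}
```

A JavaScript `Map` keeps insertion order, which is what makes this delete-and-reinsert trick work.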
- Cosine similarity for all vector comparisons
- Range: 0.0 (unrelated) to 1.0 (identical)
- Configurable threshold per query
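For reference, the comparison itself is the standard cosine formula (a standalone sketch; the server's `embedding-utils.ts` ships its own helper):

```typescript
// Cosine similarity: dot product of the vectors divided by the product of
// their Euclidean norms. Identical directions score 1, orthogonal ones 0.
function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) throw new Error("dimension mismatch");
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}
```

The dimension check matters in practice: comparing a 768-dimensional query vector against 384-dimensional vault embeddings is the model-mismatch failure mode described under Troubleshooting.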
Build the project:

```bash
npm run build
```

Rebuild on changes:

```bash
npm run watch
```

Run in development mode:

```bash
export SMART_VAULT_PATH="/path/to/your/vault"
npm run dev
```

Project structure:

```
smart-connections-mcp/
├── src/
│   ├── index.ts                    # MCP server & tool handlers
│   ├── search-engine.ts            # Semantic search logic (async)
│   ├── ollama-client.ts            # Ollama integration & caching
│   ├── smart-connections-loader.ts # Data loading
│   ├── embedding-utils.ts          # Vector math utilities
│   └── types.ts                    # TypeScript type definitions
├── dist/                           # Compiled JavaScript (generated)
├── package.json
├── tsconfig.json
└── README.md
```
- Ensure your vault has the Smart Connections plugin installed
- Verify embeddings have been generated (check the `.smart-env/multi/` directory)
- Check that `SMART_VAULT_PATH` points to the correct vault
- Run Smart Connections in Obsidian at least once to generate configuration
- Check for `.smart-env/smart_env.json` in your vault
- Some notes may not have embeddings if they're too short (< 200 chars)
- Re-run Smart Connections embedding generation in Obsidian
- Verify the configuration file syntax (JSON must be valid)
- Check the file paths are absolute paths, not relative
- Restart Claude Desktop completely
- Check Claude Desktop logs for error messages
- This is normal behavior; the server continues to work with keyword matching
- To enable semantic search:
  - Install Ollama: https://ollama.ai
  - Pull the embedding model: `ollama pull nomic-embed-text-v2-moe`
  - Verify Ollama is running: `curl http://localhost:11434/api/tags`
  - Restart Claude Desktop
- Your `OLLAMA_MODEL` doesn't match your vault embeddings
- Check your vault's model: look in `.smart-env/smart_env.json` → `"embed_model"`
- Update `OLLAMA_MODEL` in your MCP configuration to match
- Common combinations:
  - Vault uses `TaylorAI/bge-micro-v2` → Ollama model: `TaylorAI/bge-micro-v2`
  - Vault uses `nomic-embed-text-v2-moe` → Ollama model: `nomic-embed-text-v2-moe:latest`
- First queries are slower (~500-800ms) while building cache
- Subsequent identical queries should be <50ms
- Check that the cache file exists: `.smart-env/query-cache/embeddings.json`
- The cache is LRU with a maximum of 1000 entries; old entries are removed automatically
MIT
Daniel Glickman
- Built for use with Obsidian
- Integrates with Smart Connections plugin
- Uses Model Context Protocol by Anthropic