Open Brain — The infrastructure layer for your thinking. One database, one AI gateway, one chat channel — any AI plugs in. No middleware, no SaaS.
Updated Apr 21, 2026 - TypeScript
The memory layer for AI-native development — giving AI persistent understanding of your software projects.
Human-like memory for AI agents — semantic, episodic & procedural. Experience-driven procedures that learn from failures. Free API, Python & JS SDKs, LangChain, CrewAI & OpenClaw integrations.
Shared Memory Storage for Multi-Agent Systems
TypeScript agent memory layer: semantic vector recall + SQLite-backed storage, Chroma or in-memory vectors, REST API, MIT.
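The "semantic vector recall" these layers advertise boils down to ranking stored memories by embedding similarity to a query. A minimal, self-contained sketch (not any specific repo's API; the entries and embeddings are toy values, and a real layer would get embeddings from a model and persist them, e.g. in SQLite or Chroma):

```typescript
// Hypothetical sketch: store (text, embedding) pairs in memory and
// return the top-k entries by cosine similarity to a query embedding.
type MemoryEntry = { text: string; embedding: number[] };

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

function recall(store: MemoryEntry[], query: number[], k: number): string[] {
  return store
    .map((e) => ({ text: e.text, score: cosine(e.embedding, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k)
    .map((e) => e.text);
}

// Toy 3-dimensional embeddings for illustration only.
const store: MemoryEntry[] = [
  { text: "user prefers TypeScript", embedding: [1, 0, 0] },
  { text: "project uses SQLite", embedding: [0, 1, 0] },
  { text: "user dislikes Java", embedding: [0.9, 0.1, 0] },
];

console.log(recall(store, [1, 0, 0], 2));
// → ["user prefers TypeScript", "user dislikes Java"]
```

Swapping the in-memory array for a SQLite table (or a vector store like Chroma) changes only where the entries live; the recall step stays a similarity ranking.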
The cognition layer that turns your agent into a HyperAgent
Benchmark suite for evaluating retrieval quality and latency of AI agent context systems
Synapse Layer — Continuous Consciousness Infrastructure for AI Systems. Persistent. Secure. 1-line integration.
The lightest universal AI memory layer. One SQLite file, any LLM, zero cloud. MCP + HTTP + CLI. Smart Recall, Knowledge Evolution, Auto-Capture, Interactive Dashboard.
Unified memory layer for AI coding agents: incremental transcript sync, ranked search, archive/restore.
🧠 Persistent memory for AI agents. SQLite for agent state. Zero cloud dependencies. Local embeddings. MCP-native integration with Claude Desktop/Code, Cursor, Windsurf & more.
Vendor-neutral memory layer for AI agents. Give ChatGPT, Claude, Cursor, Gemini, and Grok shared persistent memory. TypeScript SDK, MCP server, REST API.
Maximem Synap is the memory layer that makes AI agents remember. #1 on LongMemEval (90.2%). Works natively with LangChain, LlamaIndex, CrewAI, Google ADK, AutoGen, OpenAI Agents, Semantic Kernel, Haystack, and Pydantic AI.
Persistent identity and memory across AI tools - MCP-native, local-first, framework-agnostic, production-ready.
SQL Native Memory Layer for LLMs, AI Agents & Multi-Agent Systems
Memory and recall layer for AoA: explicit memory objects, provenance threads, temporal relevance, salience, and reviewable recall contracts.
The ultimate memory layer for local LLMs: mitigates the limitations of the context window.
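"Mitigating the context window" typically means keeping only the most recent conversation turns that fit a token budget in the prompt, while older turns are offloaded to the memory layer. A minimal sketch under stated assumptions (token counting is a crude word count, not a real tokenizer; the message shape is invented for illustration):

```typescript
// Hypothetical sketch: walk the history backwards so the newest
// messages claim the budget first, and stop once the budget is spent.
type Message = { role: string; content: string };

function approxTokens(text: string): number {
  // Crude stand-in for a tokenizer: count whitespace-separated words.
  return text.split(/\s+/).filter(Boolean).length;
}

function fitToWindow(history: Message[], budget: number): Message[] {
  const kept: Message[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const cost = approxTokens(history[i].content);
    if (used + cost > budget) break;
    kept.unshift(history[i]); // preserve chronological order
    used += cost;
  }
  return kept;
}

const history: Message[] = [
  { role: "user", content: "first question about setup" },
  { role: "assistant", content: "a long detailed answer with many words here" },
  { role: "user", content: "latest question" },
];

console.log(fitToWindow(history, 10).length); // → 2 (oldest turn dropped)
```

The dropped turns are exactly what a memory layer would index for later recall instead of discarding.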
Go bindings for Honcho - Persistent memory layer for AI applications. Type-safe API client for contextual conversation memory.
A memory layer for LLMs that extracts entities, relationships, and decisions from conversations and stores them in a semantic knowledge graph, giving any AI persistent, structured context across sessions.
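The knowledge-graph approach described above stores extracted facts as (subject, relation, object) triples and answers questions by querying entities. A minimal sketch, not that project's actual API: the extraction step is faked with hand-written triples, where a real layer would use an LLM to pull them from conversation.

```typescript
// Hypothetical sketch of a triple-based semantic knowledge graph.
type Triple = { subject: string; relation: string; object: string };

class KnowledgeGraph {
  private triples: Triple[] = [];

  add(subject: string, relation: string, object: string): void {
    this.triples.push({ subject, relation, object });
  }

  // All facts that mention the entity as subject or object.
  about(entity: string): Triple[] {
    return this.triples.filter(
      (t) => t.subject === entity || t.object === entity
    );
  }
}

// Entity and relation names below are invented for illustration.
const kg = new KnowledgeGraph();
kg.add("alice", "decided", "use PostgreSQL");
kg.add("alice", "works_on", "billing-service");
kg.add("billing-service", "written_in", "TypeScript");

console.log(kg.about("billing-service").length); // → 2
```

Because facts are structured rather than raw transcript text, any AI session can re-query decisions and relationships without replaying the original conversation.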
Asynchronous messaging between Claude threads — built entirely in Markdown.