GBrain: Open-Source AI Memory Engine that Gives OpenClaw and Hermes Long-Term Recall
GBrain, an open-source AI memory hub created by YC partner Garry Tan, combines Postgres tsvector keyword search with pgvector semantic search via Reciprocal Rank Fusion (RRF), manages thousands of Markdown notes, and runs an automated nightly agent that refines and links memories, offering a practical long-term recall layer for agents such as OpenClaw and Hermes.
When an AI agent such as “OpenClaw” must retrieve information from a corpus of more than 10,000 Markdown files, traditional grep becomes impractical: response time degrades from milliseconds to over 30 seconds once the file count exceeds roughly 3,000.
GBrain, an open‑source AI memory core released by YC partner Garry Tan, addresses this limitation by storing raw Markdown as the immutable fact source and using PostgreSQL only as a retrieval layer. It blends exact‑match keyword search (Postgres tsvector) with vector‑based semantic search (pgvector) and merges the results with the Reciprocal Rank Fusion (RRF) algorithm.
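The RRF step is simple to state: a document's fused score is the sum of 1 / (k + rank) over every result list it appears in. A minimal sketch follows; the function name and the k = 60 default are illustrative conventions, not GBrain's actual code:

```python
def rrf_merge(keyword_hits, vector_hits, k=60):
    """Merge two ranked lists of document IDs with Reciprocal Rank Fusion.

    A document's fused score is the sum of 1 / (k + rank) over every
    list it appears in; k = 60 dampens the influence of top ranks.
    """
    scores = {}
    for hits in (keyword_hits, vector_hits):
        for rank, doc_id in enumerate(hits, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = rrf_merge(
    keyword_hits=["note_12", "note_7", "note_3"],  # e.g. tsvector ranking
    vector_hits=["note_7", "note_9", "note_12"],   # e.g. pgvector ranking
)
print(fused[0])  # note_7 wins: it ranks highly in both lists
```

Because RRF only consumes ranks, the incomparable scores of tsvector and pgvector never need to be normalised against each other.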
What makes GBrain different
Real-world data scale: manages 21,000+ calendar events, 5,800 Apple Notes entries, 280 meeting records, and 300 original ideas collected over 13 years.
Hybrid search architecture: combines keyword and vector search as described above.
Knowledge compounding model: a “compile-facts + timeline” structure in which the top layer holds dynamically updated conclusions and the bottom layer preserves an immutable evidence chain.
Night-time dream loop: an AI agent runs overnight, while the user sleeps, to analyse the day's dialogues, repair broken references, and merge fragmented memories.
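The compile-facts + timeline split can be sketched as a page with a mutable conclusions layer over an append-only evidence list. All names below are hypothetical; GBrain's real storage is Markdown plus Postgres:

```python
from dataclasses import dataclass, field

@dataclass
class EntityPage:
    """Two-layer memory page: compiled facts on top of an
    append-only timeline. Field names are hypothetical."""
    name: str
    facts: dict = field(default_factory=dict)     # dynamically updated conclusions
    timeline: list = field(default_factory=list)  # immutable evidence chain

    def record(self, date, evidence, **new_facts):
        self.timeline.append((date, evidence))  # appended, never rewritten
        self.facts.update(new_facts)            # conclusions may be overwritten

page = EntityPage("Acme Corp")
page.record("2024-03-01", "intro call notes", status="prospect")
page.record("2025-01-15", "signed pilot agreement", status="customer")
# The top layer holds only the latest conclusion...
print(page.facts["status"])  # customer
# ...while the timeline preserves the full evidence chain.
print(len(page.timeline))    # 2
```

The point of the split is that conclusions can compound and be corrected over time without ever destroying the evidence they were derived from.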
Core workflow
Entity detection: each incoming message is automatically parsed to identify people, companies, and concepts.
Three-layer query: first query GBrain's world knowledge, then the agent's configuration memory, and finally the current conversation context.
Write-back reinforcement: update the relevant entity pages, create cross-references, and append a new entry to the timeline.
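The three steps above can be sketched end to end. Everything here is illustrative: plain dicts and lists stand in for GBrain's three retrieval layers, and substring matching stands in for real entity detection:

```python
def handle_message(message, world, agent_memory, conversation):
    """One pass of the detect -> query -> write-back loop (a toy sketch)."""
    # 1. Entity detection: find known people/companies/concepts in the message.
    entities = [name for name in world if name.lower() in message.lower()]

    # 2. Three-layer query: world knowledge, then agent config, then context.
    context = [world[name] for name in entities]
    context.append(agent_memory.get("persona", ""))
    context.extend(conversation[-3:])  # most recent turns only

    # 3. Write-back reinforcement: update entity pages and append to history.
    for name in entities:
        world[name] += f" | mentioned in: {message!r}"
    conversation.append(message)
    return entities, context

world = {"OpenAI": "AI lab; partnership discussions ongoing"}
entities, ctx = handle_message(
    "Any update on the OpenAI collaboration?",
    world,
    agent_memory={"persona": "concise assistant"},
    conversation=[],
)
print(entities)  # ['OpenAI']
```

The write-back step is what makes the memory compound: each message both consumes and enriches the entity pages it touches.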
Technical rationale
With more than 3,000 Markdown files, grep latency jumps from milliseconds to >30 seconds.
The system must support both precise name lookup (e.g., “find Pedro Franceschi’s email”) and semantic queries (e.g., “advice on counter‑intuitive entrepreneurship”).
Knowledge and storage are decoupled: Markdown remains the single source of truth; PostgreSQL serves only as an index.
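The decoupling is the key property: because Markdown is the source of truth, the index is derived state that can be dropped and rebuilt from the files at any time. A toy sketch, with an in-memory dict standing in for PostgreSQL:

```python
import tempfile
from pathlib import Path

def build_index(notes_dir):
    """Derive a word -> file inverted index from Markdown files.
    A dict stands in for PostgreSQL: disposable, rebuildable state."""
    index = {}
    for path in Path(notes_dir).glob("**/*.md"):
        for word in set(path.read_text(encoding="utf-8").lower().split()):
            index.setdefault(word, set()).add(path.name)
    return index

def search(index, term):
    return sorted(index.get(term.lower(), set()))

# The Markdown files are the single source of truth...
notes = Path(tempfile.mkdtemp())
(notes / "pedro.md").write_text("Pedro Franceschi email thread", encoding="utf-8")
# ...and the index is rebuilt from them, never the other way round.
index = build_index(notes)
print(search(index, "Pedro"))  # ['pedro.md']
```

Losing the database therefore loses nothing: a re-import regenerates the index, while editing a Markdown file by hand remains a first-class way to change the memory.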
Quick start
```shell
bun add github:garrytan/gbrain
gbrain init --supabase
gbrain import ~/my_brain/
gbrain query "How did our last collaboration with OpenAI go?"
```

For users who do not wish to maintain a PostgreSQL instance, the project ships a complete skill package that lets an AI agent automatically digest meeting notes, generate daily briefs, and monitor knowledge health.
Echoing Vannevar Bush’s 1945 Memex vision, GBrain aims to turn digital memory into an extension of thought rather than a burden.
Which approach do you think is better, GBrain’s or Karpathy’s?
AI Engineering
Focused on cutting‑edge product and technology information and practical experience sharing in the AI field (large models, MLOps/LLMOps, AI application development, AI infrastructure).
