CodeBrain-1 and MemBrain1.5: Open‑Source SOTA Logic and Memory for Agentic AI

Feeling AI has open‑sourced CodeBrain-1 and MemBrain1.5, two agentic AI components that combine dynamic planning, hierarchical memory, and a five‑layer architecture. Together they set new SOTA scores on benchmarks such as Terminal‑Bench 2.0, cut token costs by roughly 64%, and provide a full engineering stack for next‑generation AI agents.

Machine Heart

Feeling AI announced the open‑source release of CodeBrain-1 and MemBrain1.5, two world‑model components that bring autonomous logic and hierarchical memory to agents, marking a shift from the "stateless" tool era to deep human‑AI collaboration.

CodeBrain‑1: A Compiler's Eye for Agents

CodeBrain‑1 adds dynamic‑planning and strategy‑adjustment capabilities, improving task success rates in real‑world environments. It focuses on two key mechanisms: Useful Context Searching (feeding the model only genuinely useful context) and Validation Feedback (turning failures into actionable information).

When paired with the GPT‑5.3‑Codex base model, CodeBrain‑1 reached 72.9% on the global Terminal‑Bench 2.0 benchmark, ranking among the top 10 worldwide as the only Chinese team in the top tier.

CodeBrain‑1.5 further improves performance to 81.3%, outperforming baseline models such as Claude‑Opus‑4.6, MiniMax‑M2.5, GLM‑5 and Qwen3.5. In a full‑task test, token cost dropped from $313.0 (Claude‑Opus‑4.6) to $112.9, a 63.9% reduction, demonstrating that structured perception benefits both accuracy and cost.

The authors identify three shortcomings of current top agents (Claude Code, Cursor, OpenCode): inefficiency (dozens of ls / find calls), fuzziness (plain string matching cannot distinguish a call from a comment) and fragility (environment tweaks cause infinite loops). To address these, CodeBrain‑1 implements a five‑layer, ~7,600‑line Python library plus an MCP server that wraps LSP language servers and tree‑sitter parsers into 11 intent‑driven tools:

Core layer: model, configuration, workspace, toolchain.

Engine layer: LSP engine with fallback chain, tree‑sitter search.

Tool layer: eight atomic operations for validation, navigation, and search.

Skill layer: context diagnostics, impact analysis, symbol search.

MCP server: one‑click access to all 11 tools.

Key features include multi‑language support (Python, Go, TypeScript/JavaScript, C/C++), graceful fallback to CLI tools when language servers are unavailable, automatic Monorepo detection, zero‑framework coupling, and intent‑driven tool aggregation (e.g., validate, explore_symbol, search, check_impact, debug_trace, rename_symbol).

These capabilities enable agents to quickly grasp project structure, reduce code‑search cost via syntax‑aware tree‑sitter queries, tighten edit‑validation loops, make refactoring safer, accelerate debugging across the supported languages, and work reliably across environments.
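The graceful‑fallback behavior can be sketched as a chain of engines tried in order of precision. The function names and the `RuntimeError` signaling below are assumptions for illustration, not the library's actual API; the released code wraps real LSP servers, while this sketch degrades to a word‑boundary text search:

```python
import re

def lsp_find_references(symbol, files):
    # Stand-in for a real LSP query; here it always fails, simulating
    # a workspace with no language server installed.
    raise RuntimeError("no language server for this workspace")

def text_find_references(symbol, files):
    # Syntax-unaware fallback: word-boundary match, so `foo` does not
    # accidentally match inside `foobar`.
    pat = re.compile(rf"\b{re.escape(symbol)}\b")
    hits = []
    for path, text in files.items():
        for i, line in enumerate(text.splitlines(), 1):
            if pat.search(line):
                hits.append((path, i))
    return hits

def find_references(symbol, files):
    """Fallback chain: try the precise engine first, degrade gracefully."""
    for engine in (lsp_find_references, text_find_references):
        try:
            return engine(symbol, files)
        except RuntimeError:
            continue
    return []

files = {"app.py": "def handler():\n    pass\nhandler()\n"}
refs = find_references("handler", files)  # falls back to text search
```

The design choice is that callers see one tool (`find_references`) regardless of which engine answered, which is what lets the agent work reliably across environments.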

MemBrain1.5: Re‑architecting Agentic Memory

MemBrain1.5 pushes memory benchmarks to new SOTA levels. Compared with its 1.0 predecessor, it scores higher on LoCoMo, LongMemEval, PersonaMem‑v2 and KnowMeBench Level III, surpassing systems such as MemOS, Zep and EverMemOS, with a more than 300% improvement on the hardest levels.

The breakthrough lies in a native, hierarchical memory design that combines "rich‑context atomic facts" with an adaptive entity‑tree algorithm. Facts are stored as self‑contained mini‑graphs (renderable templates) that carry timestamps and aliases, avoiding the context loss inherent in triple‑based graph stores.
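One way to picture a rich‑context atomic fact is as a template plus slots, timestamp, and aliases. The `AtomicFact` class below is a hypothetical sketch of the idea, not MemBrain's actual schema; the key contrast with a bare (subject, predicate, object) triple is that the fact can re‑render its full original context:

```python
from dataclasses import dataclass, field

@dataclass
class AtomicFact:
    # A self-contained mini-graph: entity slots plus a renderable
    # template, so the fact keeps the context a triple would discard.
    template: str                                # e.g. "{person} moved to {city}"
    slots: dict                                  # entity slots filling the template
    timestamp: str                               # when the fact was observed
    aliases: dict = field(default_factory=dict)  # entity -> known aliases

    def render(self) -> str:
        """Re-render the fact in natural language from its slots."""
        return self.template.format(**self.slots)

fact = AtomicFact(
    template="{person} moved to {city} for a new job",
    slots={"person": "Alice", "city": "Berlin"},
    timestamp="2025-03-01",
    aliases={"Alice": ["A.", "Ali"]},
)
```

A triple store would keep only `(Alice, moved_to, Berlin)` and lose the "for a new job" context as well as the time and alias metadata.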

When an entity appears in multiple facts, the system builds a semantic tree: the entity as the root, agent‑generated topic branches as intermediate nodes, and concrete facts as leaves. This tree supports online incremental maintenance—flat structures for few facts, automatically deepening as data grows, with agents deciding branch placement and splitting when overloaded.
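The adaptive entity tree can be sketched as follows. This is an illustrative toy, with a round‑robin split policy standing in for the agent‑driven branch decisions the article describes; the class name and threshold are invented for the example:

```python
class EntityTree:
    """Adaptive tree for one entity: flat while facts are few,
    deepening automatically as a topic branch overflows."""

    def __init__(self, entity, max_leaves=4):
        self.entity = entity          # root of the tree
        self.max_leaves = max_leaves  # leaves allowed per branch
        self.branches = {}            # topic branch -> list of leaf facts

    def add_fact(self, topic, fact):
        self.branches.setdefault(topic, []).append(fact)
        # Online incremental maintenance: split an overloaded branch.
        if len(self.branches[topic]) > self.max_leaves:
            self._split(topic)

    def _split(self, topic):
        # Placeholder policy: a real system would have an agent propose
        # semantic sub-topics; here we just shard facts round-robin.
        facts = self.branches.pop(topic)
        for i, fact in enumerate(facts):
            sub = f"{topic}/part{i % 2}"
            self.branches.setdefault(sub, []).append(fact)

tree = EntityTree("Alice", max_leaves=4)
for i in range(5):
    tree.add_fact("work", f"fact-{i}")  # 5th fact triggers a split
```

After the fifth fact, the flat "work" branch is replaced by two sub‑branches, mirroring how the real system deepens only where data accumulates.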

Retrieval follows a progressive strategy: parallel full‑text, vector, and tree searches return results instantly; for complex queries, a multi‑query expansion rewrites the question into complementary forms; if still insufficient, an Agentic mode lets a reflective agent analyze gaps and issue targeted follow‑up searches.
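The progressive strategy can be sketched as tiers that escalate only when the previous tier falls short. The searcher and expansion functions below are toy stand‑ins (real MemBrain would run full‑text, vector, and tree searches in parallel, and tier 3 would invoke a reflective agent), and deduplication is omitted for brevity:

```python
def retrieve(query, searchers, expand, enough=lambda r: len(r) >= 3):
    """Progressive retrieval: cheap searches first, escalate if needed."""
    # Tier 1: run all searchers on the raw query and merge results.
    results = []
    for search in searchers:
        results.extend(search(query))
    if enough(results):
        return results
    # Tier 2: multi-query expansion rewrites the question into
    # complementary forms and searches again.
    for alt in expand(query):
        for search in searchers:
            results.extend(search(alt))
    # Tier 3 (not shown): a reflective agent would analyze what is
    # still missing and issue targeted follow-up searches.
    return results

# Toy corpus keyed by searchable phrases.
memory = {"alice berlin": "Alice moved to Berlin",
          "alice job": "Alice started a new job"}

def fulltext(q):
    return [v for k, v in memory.items() if q in k]

results = retrieve("alice", [fulltext],
                   expand=lambda q: [q + " berlin", q + " job"])
```

Here tier 1 returns only two hits, so the expansion tier fires; in the real system the three tier‑1 searches run in parallel so the common case stays fast.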

MemBrain’s design reduces token consumption, improves latency, and provides auditability through explicit entity‑level links, addressing the shortcomings of pure‑text (high semantic fidelity but no explicit links) and pure‑graph (explicit links but semantic loss) approaches.

Engineering Experience and Future Directions

The authors stress that databases should be more than storage—they are core infrastructure for memory systems. Tight coupling of database and memory yields traceability, auditability, isolation and synchronization, dramatically lowering experiment iteration cost.

They also highlight the importance of the retrieval process itself as a signal: adjusting memory organization on‑the‑fly based on query paths can align the system with high‑frequency questions, though engineering challenges remain in ensuring stable incremental updates.
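Treating the retrieval process as a signal could look like the minimal sketch below: counting which branches queries touch and flagging hot ones as reorganization candidates. The `AccessTracker` class and its threshold are hypothetical; the actual reorganization step, which the authors note is the hard engineering part, is left out:

```python
from collections import Counter

class AccessTracker:
    """Use query paths as a signal: count which topic branches each
    query touches, and surface hot branches for promotion."""

    def __init__(self):
        self.hits = Counter()

    def record(self, topics):
        # Called once per query with the branches the search traversed.
        self.hits.update(topics)

    def hot_branches(self, threshold):
        # Branches hit at least `threshold` times are candidates for
        # promotion closer to the tree root.
        return [t for t, n in self.hits.items() if n >= threshold]

tracker = AccessTracker()
tracker.record(["work", "travel"])
tracker.record(["work"])
```

Promoting frequently traversed branches would align the memory layout with high‑frequency questions, at the cost of keeping incremental updates stable.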

Looking ahead, Feeling AI plans to extend the InteractBrain/Skill/Render stack, integrate a new kinetic‑grounded transformer (IKGT) for 3D dynamic interaction, and continue advancing agentic memory as a foundational layer for world‑model AI.

CodeBrain open‑source URL: https://github.com/feelingai-team/CodeBrain

MemBrain open‑source URL: https://github.com/feelingai-team/MemBrain

Tags: open-source, benchmark, Memory Systems, CodeBrain, MemBrain
Written by Machine Heart, a professional AI media and industry service platform.