How agentic-stack Enables Cross‑Tool Memory Transfer for Large Language Models

The article introduces agentic‑stack, a portable .agent folder that lets eight AI coding tools share a unified memory, skill, and protocol system, detailing its four‑layer memory model, progressive skill disclosure, shim‑based adapters, review protocols, practical team scenarios, installation steps, and architectural design.

AI Open-Source Efficiency Guide

Overview

.agent/ is a portable folder that enables Claude Code, Cursor, Windsurf and other AI coding tools to share a common memory, skill set and protocol, solving the cross‑tool memory transplantation problem for large language model‑based coding assistants.

Pain‑point scenarios addressed

Tool‑switch forgetting: workflows tuned in Claude Code are lost when switching to Cursor.

Conversation forgetting: each new session starts from scratch, requiring repeated explanations.

Experience not retained: past mistakes recur because lessons are not persisted across sessions.

Mixed‑tool chaos: teams using different tools cannot enforce a unified workflow.

Core features

Architecture innovation: harness‑agnostic design; adapters act as thin shims while the core brain remains independent.

Four‑layer memory system: simulates short‑term to long‑term human memory with automatic compression and human‑in‑the‑loop review.

Skill system: progressive disclosure with self‑rewrite hooks.

Protocol‑driven safety: permissions.md enforces permission checks to prevent AI overreach.

Composability: eight tools can be mixed without forcing a single tool on the team.

Feature 1 – Four‑layer memory system

.agent/memory/
├── personal/      # user‑defined preferences
├── working/       # current task state (auto‑archived after 2 days)
├── episodic/      # experience logs (JSONL, scored by salience)
└── semantic/      # distilled patterns shared across sessions

Key advantages:

✅ Simulates the human short‑term → long‑term memory progression.

✅ Automatic consolidation: auto_dream.py clusters episodic entries nightly.

✅ Human‑AI collaborative review: the AI stages candidates; humans make the graduation decisions.

✅ Structured storage in lessons.jsonl with a rendered view in LESSONS.md.
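For illustration only (the field names below are assumptions, not the project's documented schema), a structured record in lessons.jsonl could look like this:

```python
import json

# Hypothetical lessons.jsonl record; field names are illustrative,
# not the project's confirmed schema.
lesson = {
    "id": "ts-utc-001",
    "claim": "Always serialize timestamps in UTC",
    "conditions": ["timestamp", "serialize"],
    "salience": 0.9,
    "status": "graduated",
}

# JSONL stores one JSON object per line, so new lessons can be
# appended without rewriting the file; LESSONS.md would then be a
# human-readable rendering of these records.
line = json.dumps(lesson)
restored = json.loads(line)
assert restored == lesson
```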

Feature 2 – Skill system (progressive disclosure)

.agent/skills/
├── _index.md           # entry point for skill discovery
├── _manifest.jsonl     # machine‑readable metadata
├── skillforge/         # create new skills from patterns
├── memory-manager/     # memory reflection loop
├── git-proxy/          # safe Git operations
├── debug-investigator/ # systematic debugging workflow
└── deploy-checklist/   # pre‑deployment checklist

Key advantages:

✅ On‑demand loading – the full SKILL.md loads only when a trigger matches.

✅ Self‑evolution – skills with a self‑rewrite hook are auto‑marked for refactor after three failures.

✅ Separation of concerns – harness executes, skills encapsulate domain knowledge.
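Progressive disclosure can be sketched as trigger matching against the lightweight manifest before any full SKILL.md is read. The manifest fields and match_skills helper below are assumptions for illustration, not the project's actual API:

```python
import json

# Hypothetical _manifest.jsonl entries: lightweight metadata only.
MANIFEST = [
    {"skill": "git-proxy", "triggers": ["git", "commit", "push"]},
    {"skill": "deploy-checklist", "triggers": ["deploy", "release"]},
]

def match_skills(user_request: str) -> list[str]:
    """Return skills whose triggers appear in the request.

    Only matched skills would have their full SKILL.md loaded,
    keeping the default context small (progressive disclosure).
    """
    words = set(user_request.lower().split())
    return [m["skill"] for m in MANIFEST
            if words & set(m["triggers"])]

print(match_skills("please commit these changes"))  # → ['git-proxy']
```

Unmatched skills cost only their one-line manifest entry in context, which is what makes a large skill library affordable.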

Feature 3 – Cross‑platform adapters

Adapters for each tool are thin shims that delegate to the core .agent/ brain. Example configurations:

Claude Code – CLAUDE.md + .claude/settings.json, supports PostToolUse and Stop hooks.

Cursor – .cursor/rules/*.mdc; the reflect step must be triggered manually.

Windsurf – .windsurfrules; the reflect step must be triggered manually.

Standalone Python – run.py works with any LLM and offers full control.

Key advantages:

✅ One configuration runs everywhere.

✅ Core brain stays in .agent/.

✅ Teams can mix tools while sharing the same memory and protocols.

Feature 4 – Review protocol

# List candidates
python3 .agent/tools/list_candidates.py

# Graduate with rationale
python3 .agent/tools/graduate.py <id> --rationale "..."

# Reject with reason (keeps decision history)
python3 .agent/tools/reject.py <id> --reason "..."

# Reopen a rejected candidate
python3 .agent/tools/reopen.py <id>

Key advantages:

✅ Prevents AI‑only decisions – every graduation requires a human‑supplied rationale.

✅ Decision traceability – rejections retain reasons for churn analysis.

✅ Idempotent safety – retries do not corrupt state.
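The idempotency claim can be illustrated with a small state machine; the states and transition rules below are inferred from the CLI above, not taken from the project's source:

```python
# Assumed candidate lifecycle for the review protocol.
VALID = {
    ("candidate", "graduate"): "graduated",
    ("candidate", "reject"): "rejected",
    ("rejected", "reopen"): "candidate",
}

def transition(state: str, action: str) -> str:
    """Apply an action; repeating an already-applied action is a
    no-op, so retries cannot corrupt state (idempotent safety)."""
    target = VALID.get((state, action))
    if target is not None:
        return target
    # Already in the state this action produces? Treat as a safe retry.
    if (action, state) in {("graduate", "graduated"),
                           ("reject", "rejected"),
                           ("reopen", "candidate")}:
        return state
    raise ValueError(f"invalid transition: {state} -> {action}")

assert transition("candidate", "graduate") == "graduated"
assert transition("graduated", "graduate") == "graduated"  # retry is a no-op
```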

Practical scenarios

Scenario 1 – Unified team AI workflow

Requirement: a 5‑person team split between Claude Code and Cursor needs consistent commit conventions and code‑review standards.

Install the .agent/ brain in the project.

Edit .agent/memory/personal/PREFERENCES.md to define team conventions.

All members share the same git-proxy skill and deploy-checklist across tools.

Outcome:

✅ Consistent commit message style across tools.

✅ Enforced pre‑deployment checklist.

✅ Onboarding time reduced from two weeks to two days.

Scenario 2 – Cross‑session lesson transfer

Requirement: after a timestamp bug, ensure future operations always use UTC.

python3 .agent/tools/learn.py "Always serialize timestamps in UTC" \
    --rationale "prior bugs from mixed local/UTC comparisons"

Outcome:

✅ Structured entry stored in lessons.jsonl.

✅ Future timestamp operations automatically trigger recall.

✅ Lesson applies across all eight tools.

Scenario 3 – Nightly automatic reflection

Requirement: let AI organize daytime experience without auto‑committing to Git.

# crontab entry for 3 AM nightly run
0 3 * * * python3 /path/to/project/.agent/memory/auto_dream.py

Outcome:

✅ Clusters recurring patterns in episodic memory.

✅ Generates candidate lessons in candidates/.

✅ Produces REVIEW_QUEUE.md for human audit.

Installation

macOS / Linux

# Homebrew (recommended)
brew tap codejunkie99/agentic-stack https://github.com/codejunkie99/agentic-stack
brew install agentic-stack

# Source install
git clone https://github.com/codejunkie99/agentic-stack.git
cd agentic-stack && ./install.sh claude-code

# Deploy to project
cd your-project
agentic-stack claude-code

Windows PowerShell

git clone https://github.com/codejunkie99/agentic-stack.git
cd agentic-stack
.\install.ps1 claude-code C:\path\to\your-project

Working principle

Core module 1 – Content clustering

File:

.agent/memory/cluster.py
def content_cluster(entries, threshold=0.3, min_size=2):
    """Single‑link hierarchical clustering based on Jaccard similarity.
    Key design:
    - An entry joins every similar cluster; touched clusters then merge → correct single‑link.
    - Order‑independent: A~B and B~C land in one cluster even when A is not similar to C.
    """

Core module 2 – Pattern extraction

File:

.agent/memory/cluster.py:124
def extract_pattern(cluster):
    """Extract generalizable patterns from a cluster.
    Key design:
    - claim: reflection of the highest‑salience member.
    - conditions: shared vocabulary (greatest common divisor).
    - pattern_id: hash of claim + conditions for lifecycle tracking.
    """

Core module 3 – Active recall

File:

.agent/tools/recall.py
def recall(intent, top_k=3, min_score=0.01):
    """Lexical‑overlap retrieval of related entries.
    Key design:
    - condition weight 2× (explicit trigger side).
    - score = (claim_hits + 2*cond_hits) / (3*len(query_words)).
    - Uses lexical overlap, not semantic similarity, for efficiency.
    """

Technical architecture

┌─────────────────────────────────────────────────────────────┐
│             Adapters Layer (Claude, Cursor, …)              │
├──────────┬──────────┬──────────┬──────────┬─────────────────┤
│ Claude   │ Cursor   │ Windsurf │ OpenCode │ Standalone‑Py   │
│ Code     │          │          │          │                 │
└────┬─────┴────┬─────┴────┬─────┴────┬─────┴─────────────────┘
     │          │          │          │
     ▼          ▼          ▼          ▼
┌─────────────────────────────────────────────────────────────┐
│                  Portable Brain (.agent/)                   │
├──────────────────┬──────────────────┬───────────────────────┤
│      Memory      │      Skills      │      Protocols        │
├──────────────────┼──────────────────┼───────────────────────┤
│ • personal/      │ • skillforge     │ • permissions.md      │
│ • working/       │ • memory‑manager │ • tool_schemas/       │
│ • episodic/      │ • git‑proxy      │ • delegation.md       │
│ • semantic/      │ • debug‑…        │                       │
└──────────────────┴──────────────────┴───────────────────────┘
     │          │          │          │
     ▼          ▼          ▼          ▼
┌─────────────────────────────────────────────────────────────┐
│                    Feedback Loop System                     │
│ episodic → auto_dream → candidates → graduate/reject        │
│                     ↓                                       │
│          lessons.jsonl ← recall.py                          │
└─────────────────────────────────────────────────────────────┘

Comparison with alternatives

Key differentiators of agentic‑stack versus native Claude Code memory, Cursor rules, and LangChain memory:

Cross‑tool consistency: supports eight tools (others limited to one or require custom integration).

Memory hierarchy: four‑layer model vs. single‑layer or flat structures.

Automatic reflection: built‑in auto_dream not present in alternatives.

Human‑AI review: CLI tools for staged approval, absent elsewhere.

Skill system: progressive disclosure with self‑rewrite hooks, limited or missing in competitors.

Protocol enforcement: permissions.md provides safety guarantees.

Installation complexity: one‑line command vs. built‑in or configurable setups.

Recommendation

Choose agentic‑stack when cross‑tool consistency is required; otherwise native memory suffices for single‑tool use, or LangChain can be considered for multi‑provider needs.

References

Project repository: https://github.com/codejunkie99/agentic-stack
Official docs: docs/architecture.md