Master Codex: Advanced AGENTS.md, Context Compaction, and MCP Tricks
This guide walks experienced developers through Codex’s advanced capabilities—layered AGENTS.md configuration, Context Compaction to prevent memory loss, Claude Code hybrid workflows, sandbox and Rules security controls, the extensible MCP protocol, profile switching, pipeline integration, and session management—to turn casual use into expert mastery.
1. Layered AGENTS.md configuration
AGENTS.md is a hierarchical instruction file read by Codex on startup. The file closest to the working directory overrides more distant ones, enabling three layers:
Layer 1: ~/.codex/AGENTS.md → personal long‑term preferences
Layer 2: <repo>/AGENTS.md → team‑wide conventions
Layer 3: <repo>/services/payments/AGENTS.override.md → strict overrides for a subdirectory

Overrides replace the base file; they do not merge. Each layer must stay under the default 32 KB limit, or the project_doc_max_bytes setting must be increased. Splitting rules across sub‑directories is the recommended strategy.
2. Context Compaction
Long sessions can exceed token limits, causing earlier instructions to be forgotten and increasing cost. Codex provides two compression mechanisms:
Local handover note: when the context approaches capacity, Codex generates a summary containing progress, decisions, constraints, and remaining steps. The note is stored locally and re‑loaded on the next turn.
Server‑side compression: the OpenAI API can be called with context_management and compact_threshold to receive an encrypted compression token that the model uses as a condensed memory.
Practical management steps:
Task reset: run clear to start a fresh context for a new task.
Pre‑emptive compaction: invoke /compact before the token limit is reached.
Append‑only strategy: never edit previous messages; always add new ones to keep the cache valid.
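The pre‑emptive habit above amounts to watching token usage against a threshold and keeping a structured handover note ready. A minimal sketch: the 80% trigger is an assumed value (Codex's real trigger point is not documented here), and the four note fields mirror the description above.

```python
COMPACT_THRESHOLD = 0.8  # assumed trigger point; tune to taste

def should_compact(used_tokens: int, context_window: int,
                   threshold: float = COMPACT_THRESHOLD) -> bool:
    """Fire before the hard limit so earlier instructions are not silently dropped."""
    return used_tokens >= int(context_window * threshold)

def handover_note(progress: str, decisions: list[str],
                  constraints: list[str], remaining: list[str]) -> str:
    """Assemble the four fields a handover note carries."""
    lines = [f"Progress: {progress}"]
    lines += [f"Decision: {d}" for d in decisions]
    lines += [f"Constraint: {c}" for c in constraints]
    lines += [f"TODO: {r}" for r in remaining]
    return "\n".join(lines)
```

Calling /compact when should_compact turns true, rather than waiting for the hard limit, keeps the summary grounded in a still-complete context.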
3. Claude Code + Codex collaboration
Claude excels at reasoning, architecture design, and code review, while Codex is fast at code generation, batch file operations, and CI/CD automation. A typical serial workflow is:
Claude: understand requirements, design architecture
↓
Codex: generate code from design
↓
Claude: review and suggest improvements
↓
Codex: apply changes
↓
GPT‑5/Claude: produce documentation

Parallel collaboration lets the models work on different facets simultaneously (e.g., translation plus proofreading, or rapid comparison of alternative solutions). Key tips:
Assign distinct responsibilities—Claude designs and reviews, Codex generates.
Insert quality‑check points after each stage.
Use sandbox modes to control risk: -s read-only for reviews, -s workspace-write for implementation.
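The serial hand-off above can be scripted by piping each stage's output into the next. A sketch under clear assumptions: the -p print flag and -s sandbox flag are used as they appear elsewhere in this guide, and the stage prompts are illustrative, not prescribed.

```python
import subprocess

# Serial Claude -> Codex -> Claude -> Codex chain; each stage's stdout
# feeds the next stage's stdin. Flags and prompts are illustrative.
STAGES = [
    ["claude", "-p", "understand the requirements and design the architecture"],
    ["codex", "-s", "workspace-write", "-p", "generate code from this design"],
    ["claude", "-p", "review this code and suggest improvements"],
    ["codex", "-s", "workspace-write", "-p", "apply the suggested changes"],
]

def run_pipeline(stages: list[list[str]], initial_input: str) -> str:
    """Run each CLI stage in order, forwarding the previous output."""
    text = initial_input
    for cmd in stages:
        text = subprocess.run(cmd, input=text, capture_output=True,
                              text=True, check=True).stdout
    return text
```

Inserting a human quality gate between stages (inspect text before the next subprocess.run) matches the "quality-check points" tip above.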
4. Sandbox mode and Rules
Sandbox modes restrict file‑system access. Example ~/.codex/config.toml:
sandbox_mode = "workspace-write"
[sandbox_workspace_write]
network_access = false

The Rules system whitelists or forbids specific commands. Rules can be placed globally (~/.codex/rules.md) or per project, with project rules overriding global ones. Example approval policies:
approval_policy = "on-request" # ask before each operation
approval_policy = "full-auto" # automatic execution (dangerous)
approval_policy = "never" # never ask (very risky)

5. Model Context Protocol (MCP)
MCP extends Codex to call external tools such as file systems, APIs, databases, Git, Figma, or Notion. Common tool registrations:
# File system MCP
claude mcp add --transport stdio filesystem -- npx -y @modelcontextprotocol/server-filesystem
# Figma MCP – generate code from designs
claude mcp add --transport http figma https://mcp.figma.com/mcp
# Postgres MCP
claude mcp add --transport stdio postgres -- npx -y @modelcontextprotocol/server-postgres

6. Profile configuration
Profiles provide reusable sets of model, sandbox, and policy settings, avoiding long alias chains. Example ~/.codex/config.toml:
[profiles.default]
model = "gpt-5.3-codex"
model_reasoning_effort = "high"
web_search = "cached"
[profiles.review]
model = "gpt-5.3-codex"
sandbox_mode = "read-only"
approval_policy = "never"
[profiles.quick]
model = "o4-mini"
model_reasoning_effort = "medium"
[profiles.ci]
model = "gpt-5.3-codex"

Use a profile via the CLI, e.g. codex --profile review or codex --profile quick, or override settings temporarily with codex -m gpt-5.4 "fix this bug".
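The precedence is worth keeping straight: a CLI flag like -m beats the selected profile, which in turn fills in over base defaults. A toy merge illustrating that ordering (the exact merge semantics Codex applies are an assumption of this sketch):

```python
def effective_config(defaults: dict, profile: dict, cli_flags: dict) -> dict:
    """Later sources win: defaults < selected profile < CLI flags."""
    merged = dict(defaults)
    merged.update(profile)   # profile settings shadow the defaults
    merged.update(cli_flags) # explicit CLI flags shadow everything
    return merged
```

Under this model, codex --profile review -m gpt-5.4 would run with the review profile's read-only sandbox but with the model overridden for that one invocation.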
7. Pipeline mode
Codex can read from stdin, making it easy to embed in shell pipelines:
# Code review
git diff | codex -p "review these changes"
# Test failure analysis
npm test 2>&1 | codex -p "analyze why tests failed"
# Automated commit creation
codex -p "create commit" --allowedTools "Bash(git diff *),Bash(git commit *)"

Structured JSON output is available with --output-format json for downstream automation:

codex -p "summarize" --output-format json

GitHub Actions integration example:
- name: Run Codex Review
run: |
git diff ${{ github.event.before }} ${{ github.sha }} |
codex -p "security review" --output-format json > review.json

8. Session management
Resume interrupted work with codex --continue or pick a named session via codex --resume. Sessions persist for 30 days by default; the cleanupPeriodDays setting can extend this period.
codex --continue # continue the last session
codex --resume # select a saved session

Name a session for later retrieval:
/codex-rename api-migration
/resume api-migration

Cross‑device sync is possible with codex --teleport <session_id>, which pulls the session state to another machine.