Master Claude Code’s 1M‑Token Context: Proven Strategies to Manage, Compact, and Rewind

Claude Code now supports a 1 million‑token context window, but effective use hinges on disciplined context management: choosing when to continue, rewind, clear, compact, or delegate to sub‑agents, and applying three core concepts (context windows, compaction, and context rot) to avoid performance pitfalls.

The 1M‑token context window raises the ceiling for Claude Code, but the actual user experience hinges on how you manage that context.

Key insight

Context‑management ability determines whether a large window becomes an advantage or a trap.

Why a larger context can become a trap

A 1M‑token window gives you many possible actions after each model turn:

Continue: keep the current session

/rewind: jump back to a previous node and retry

/clear: start a fresh session with a concise brief

/compact: summarize the long history and continue

Subagent: delegate a segment to an isolated context and retrieve only the conclusion

Choosing the wrong branch is often the root cause of failures, not a poorly written prompt.

Three core concepts

1) Context window

The context window is everything the model can "see" when generating the next response. It typically includes:

System prompt

Current conversation history

Tool call records and outputs

Read file contents

New user instructions

Claude Code’s 1M‑token window lets you handle longer, more complex task chains in a single session.
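To build intuition for how those components fill the window, here is a minimal sketch that estimates usage per component. The 4‑characters‑per‑token ratio is a common rough heuristic for English text, not Claude's actual tokenizer, and all names here are illustrative.

```python
# Rough sketch: estimating how much of a context window each component uses.
# The 4-chars-per-token ratio is a heuristic, not Claude's exact tokenizer.

CONTEXT_LIMIT = 1_000_000  # tokens in Claude Code's large context window

def estimate_tokens(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def context_usage(components: dict[str, str]) -> dict[str, int]:
    """Return an estimated token count per context component."""
    return {name: estimate_tokens(text) for name, text in components.items()}

session = {
    "system_prompt": "You are a coding assistant..." * 10,
    "history": "user: fix the auth bug\nassistant: reading files...\n" * 200,
    "tool_output": "src/middleware/auth.ts contents...\n" * 500,
}
usage = context_usage(session)
total = sum(usage.values())
print(f"~{total} of {CONTEXT_LIMIT} tokens used ({total / CONTEXT_LIMIT:.1%})")
```

The point of the breakdown is that tool outputs and file reads, not your instructions, usually dominate the budget.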

2) Compaction

Because the window has a hard limit, long sessions must be condensed into a shorter summary before continuing. Compaction can be triggered manually or automatically. It saves space and preserves continuity but is inherently lossy.

3) Context decay (context rot)

As the context grows, attention becomes diluted, old information creates noise, and the model’s focus on the current goal degrades. This is not a sudden memory loss but a "cluttered workspace" that reduces efficiency.

Branching actions after each round

Continue: keep the current session

/rewind: jump back to a historical node and retry

/clear: open a new session with a hand‑off brief

/compact: summarize history then continue

Subagent: run a sub‑task in an isolated context and return only the final result

Habit 1 – Start a new session for a new task

Rule: If the task goal changes, open a new session. Example:

/clear

Task: Write release notes for auth refactor
Background: Token refresh and middleware refactor completed
Key files: src/middleware/auth.ts, docs/auth.md
Constraint: Do not modify business code, only output a draft document

Habit 2 – Use /rewind for error correction

Instead of patching a failed path, rewind to the point before the error and issue a fresh instruction. Example:

/rewind
# Return to the point just after reading the critical files

Do not use plan A (module foo lacks the required interface)
Proceed with plan B and add unit and regression tests

Habit 3 – Treat /compact as a proactive strategy

Compact early and provide a focused instruction to guide the summary. Example:

/compact focus on:
1. Final auth refactor solution and constraints
2. Excluded paths and failure reasons
3. Only retain context related to bar.ts warning
4. Discard test‑tuning log details

Habit 4 – Distinguish the roles of /compact and /clear

/compact lets the model summarize history and continue – easy but model‑driven. /clear requires you to write a hand‑off brief and start fresh – more effort but full control.

Choose based on the situation:

Consistent, same‑direction task → /compact

Goal switch or boundary change → /clear

Sensitive information requiring strict control → /clear

Fast‑paced progress where you just need momentum → /compact
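The mapping above is simple enough to write down as a lookup. This sketch uses my own shorthand labels for the situations described in this habit; it is not a Claude Code API, just the decision rule as code.

```python
# Minimal sketch of the /compact-vs-/clear decision rule. The situation
# labels are illustrative shorthand, not part of any Claude Code interface.

def next_action(situation: str) -> str:
    """Map a session situation to the context-management command to use."""
    decisions = {
        "same_direction": "/compact",   # consistent, same-direction task
        "goal_switch": "/clear",        # goal or boundary change
        "sensitive_info": "/clear",     # strict control over what carries over
        "need_momentum": "/compact",    # fast-paced progress, keep moving
    }
    # Anything else: no context surgery needed, just continue the session.
    return decisions.get(situation, "continue")
```

The default branch matters: most turns need neither command, and reaching for /compact or /clear reflexively throws away usable context.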

Habit 5 – Use Subagent as a context isolator

Subagents isolate noisy intermediate output and return only the final conclusion. Typical uses:

Read another codebase and summarize the auth flow

Cross‑check a spec for acceptance testing

Generate a draft document from git changes

Run tests and categorize failures

Ask yourself: “Do I need the full process or just the final conclusion?” If it’s the latter, split it out.
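The isolation idea can be shown in a few lines. `run_subtask` below is a stand‑in for delegating to a real subagent, not an actual API: the point is that the subtask's noisy transcript never enters the parent context, only its last line does.

```python
# Conceptual sketch of subagent isolation: intermediate noise stays in the
# subtask's own context; only the final conclusion reaches the main session.
# `run_subtask` is a hypothetical stand-in, not a real Claude Code call.

def run_subtask(transcript: list[str]) -> str:
    """Pretend to run a subtask; return only its final conclusion."""
    # In a real subagent, the intermediate lines (tool calls, file dumps,
    # failed attempts) would never reach the parent context at all.
    return transcript[-1]

noisy_transcript = [
    "Reading 42 files in the other codebase ...",
    "Tracing login() -> verifyToken() -> refresh() ...",
    "Conclusion: auth uses short-lived JWTs refreshed by middleware.",
]
main_context: list[str] = []
main_context.append(run_subtask(noisy_transcript))  # one line added, not 42 files
```

The trade is deliberate: you give up visibility into the process in exchange for a main context that stays focused on the goal.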

Why bad compaction happens

Typical failure pattern: after a long debugging session an automatic compaction triggers, then the goal changes. The compacted summary may drop the new goal because the model deems it irrelevant, leading to loss of needed context. Bad compaction usually occurs at the point of highest context load – when context decay is strongest.

Three practical tips to avoid bad compaction

Proactively compact before the window becomes critical.

Always include a clear focus directive so the model knows the next step.

If the direction changes sharply, use /clear instead of relying on compression quality.
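The first tip can be made concrete with a threshold check: compact manually once usage crosses a line well below the hard limit, instead of letting auto-compaction fire at the worst moment. The 70% threshold here is an illustrative choice, not a Claude Code default.

```python
# Sketch of "compact before the window becomes critical": trigger a manual
# /compact well below the hard limit rather than waiting for auto-compaction.
# The 70% threshold is an illustrative choice, not a documented default.

def should_compact(used_tokens: int, limit: int = 1_000_000,
                   threshold: float = 0.70) -> bool:
    """True once context usage crosses the proactive-compaction threshold."""
    return used_tokens >= limit * threshold

print(should_compact(650_000))  # below threshold, keep working
print(should_compact(720_000))  # time to /compact with a focus directive
```

Compacting at 70% rather than 100% means the summary is produced while the model can still attend to the whole history, which is exactly when compaction quality is highest.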

Reusable decision table

| Situation | Action |
| --- | --- |
| Task continues in the same direction | Continue |
| A step failed and you want to retry from an earlier point | /rewind |
| Goal switch, boundary change, or sensitive information | /clear |
| Long same‑direction session approaching the window limit | /compact |
| Sub‑task where you only need the conclusion, not the process | Subagent |
Tags: prompt engineering, large language model, Claude, context management, AI workflow
Written by AI Code to Success

Focused on hardcore practical AI technologies (OpenClaw, ClaudeCode, LLMs, etc.) and HarmonyOS development. No hype—just real-world tips, pitfall chronicles, and productivity tools. Follow to transform workflows with code.
