Grievance #1 · Dealbreaker

The Context That Dies

Claude Code's 1M window still compacts in hours. Compaction is lossy. Memory is zero. This is the single most complained-about issue across every platform.

The Architecture Problem

Advertised: 1M Context Window

The headline number — upgraded from 200K. But compaction still triggers on real engineering sessions. A large codebase with multi-file refactors fills this in hours, not days.

Effective: Compaction Still Triggers

Even at 1M, auto-compaction triggers on real-world sessions. The model summarizes the conversation, discarding file contents, architectural decisions, and nuance. There is no recovery path.

Post-Compaction: ~20K Summary

After compaction, the model retains a lossy summary. Users report it behaves "like a different, dumber model" — because it functionally is one.

Cross-Session Memory: Zero

Every new session starts cold. "You re-explain your tech stack. You re-describe your file structure. You re-state your preferences. Every. Single. Session."

Context enters wide, exits narrow, and never comes back.
Hidden drain: CLAUDE.md loads ~10K tokens on every request. A 9-developer team reported bills 3x higher than expected. See Cost & Rate Limits for the full token tax breakdown.
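The arithmetic behind that hidden drain is simple to sketch. In the snippet below, the ~10K tokens and the 9-developer team come from the report above; the requests-per-day figure and the per-token price are illustrative assumptions, not measured values:

```python
# Back-of-envelope sketch of the CLAUDE.md token tax described above.
# Only CLAUDE_MD_TOKENS (~10K) and DEVELOPERS (9) come from the report;
# the other constants are assumptions for illustration.

CLAUDE_MD_TOKENS = 10_000        # ~10K tokens re-sent with every request
REQUESTS_PER_DEV_PER_DAY = 200   # assumption: a busy agentic session fires many requests
DEVELOPERS = 9                   # team size from the report above
PRICE_PER_MTOK_INPUT = 3.00      # assumption: $ per 1M input tokens; check current pricing

daily_overhead_tokens = CLAUDE_MD_TOKENS * REQUESTS_PER_DEV_PER_DAY * DEVELOPERS
daily_overhead_usd = daily_overhead_tokens / 1_000_000 * PRICE_PER_MTOK_INPUT

print(f"{daily_overhead_tokens:,} overhead tokens/day")            # 18,000,000
print(f"${daily_overhead_usd:.2f}/day just re-sending CLAUDE.md")  # $54.00
```

Under these assumptions the team pays for 18M tokens a day that carry zero new information, which is how a bill lands at a multiple of the estimate.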

The Compaction Death Spiral

Session #24179 · a documented death spiral
00:00 · Session starts. 1M tokens available. Model reads project files. [normal]
00:42 · Context at 85%. Auto-compaction triggered. Summary generated. [compacting]
00:45 · Post-compaction: the model re-reads files it already processed. Context fills again. [re-filling]
00:58 · Second compaction. A summary of a summary. Architectural decisions lost. [degraded]
01:15 · Cycle 8. The model contradicts its own prior decisions and starts re-implementing committed code. [spiraling]
02:30 · Cycle 47. The model can no longer track file dependencies. Random edits begin. [death spiral]
04:00+ · 211 compactions logged. Zero meaningful progress. Session abandoned. [abandoned]
GitHub #24179: "211 compactions in a single session with zero meaningful progress. The model can't escape the loop — each compaction loses enough context to force re-reading, which fills the window, which triggers another compaction."
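The feedback loop described in #24179 can be sketched as a toy simulation. Every constant below is an illustrative assumption, not a Claude Code internal; the point is the structure of the loop, not the exact numbers:

```python
# Toy simulation of the compaction loop: re-read -> fill -> compact -> repeat.
# All constants are assumptions for illustration.

WINDOW = 1_000_000   # advertised context window
COMPACT_AT = 0.85    # assumption: auto-compaction fires at 85% full
SUMMARY = 20_000     # ~20K lossy summary retained after compaction
FILES = 900_000      # assumption: tokens needed to (re-)read the working set

context = 0
compactions = 0
useful_state = 0     # tokens of file/decision context the model actually holds

for step in range(50):
    # The model must re-read everything the last compaction discarded.
    context += FILES - useful_state
    useful_state = FILES
    if context >= WINDOW * COMPACT_AT:
        # Compaction: everything collapses to a lossy summary.
        context = SUMMARY
        useful_state = SUMMARY   # file contents and decisions are gone
        compactions += 1

print(f"{compactions} compactions in 50 steps; "
      f"model retains {useful_state:,} of {FILES:,} needed tokens")
```

With a working set near the compaction threshold, every re-read triggers another compaction and the model never holds more than the summary. That is the loop: each compaction loses enough context to force re-reading, which refills the window, which triggers the next compaction.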

Community Voices

"The more you let Claude compact — the more it degrades."
u/AuthenticIndependent · r/ClaudeCode
"Anyone else using Claude Code and realizing the real problem isn't the code, it's the lost context?"
u/Driver_Octa · r/ClaudeCode
"Rules like DESCRIBE tables are silently dropped from memory as context grows."
GitHub #32659
"We were happy with less context and quality work. The 1M context upgrade is destroying the very reason we chose Claude over GPT."
GitHub #39715
"You re-explain your tech stack. You re-describe your file structure. You re-state your preferences. Every. Single. Session."
DEV Community · Feb 2026
"You're 45 minutes into a Claude session. It was brilliant at the start — now it's contradicting itself and forgetting your instructions."
Stefan Wirth · YouTube · "Context Rot"

SubQ Code Answer

SubQ Code

12,000,000 Tokens

12x larger context. Entire codebases fit. Compaction triggers far less often — most sessions never hit the wall. Perfect recall at hour 2, hour 5, hour 10. The context window that outlasts the workday.
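Whether a given repo actually fits in a 12M-token window is easy to estimate yourself. The sketch below uses the common ~4 characters/token heuristic, which is an approximation, not an exact tokenizer count:

```python
# Rough check of whether a codebase fits in a 12M-token window.
# ~4 characters/token is a common heuristic, not an exact count.

import os

def estimate_tokens(root: str, exts=(".py", ".ts", ".go", ".rs", ".java")) -> int:
    """Walk a repo and estimate its token footprint from file sizes."""
    total_chars = 0
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if name.endswith(exts):
                try:
                    with open(os.path.join(dirpath, name),
                              encoding="utf-8", errors="ignore") as f:
                        total_chars += len(f.read())
                except OSError:
                    pass  # skip unreadable files
    return total_chars // 4  # heuristic: ~4 chars per token

# usage: estimate_tokens("./my-repo") < 12_000_000 means the whole repo fits
```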

12x larger · rare compaction · perfect recall · session continuity