The Architecture Problem
Advertised: 1M Context Window
The headline number, upgraded from 200K. But compaction still triggers on real engineering sessions: a large codebase with multi-file refactors fills the window in hours, not days.
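The "hours, not days" claim is back-of-envelope arithmetic. A rough sketch of the math, where every rate below is an assumption for illustration, not a measurement:

```python
# Back-of-envelope: how fast a heavy refactor session burns through
# a 1M-token window. All rates are illustrative assumptions.
WINDOW = 1_000_000              # advertised context window (tokens)
TOKENS_PER_FILE = 3_000         # assumed average source file
FILES_PER_STEP = 8              # files read or edited per change
STEPS_PER_HOUR = 12             # assumed pace: one change every 5 minutes

tokens_per_hour = TOKENS_PER_FILE * FILES_PER_STEP * STEPS_PER_HOUR
hours_to_fill = WINDOW / tokens_per_hour   # roughly 3.5 hours
```

Even with conservative numbers, the window fills within a single working session.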
Effective: Compaction Still Triggers
Even at 1M, auto-compaction triggers on real-world sessions. The model summarizes the conversation, discarding file contents, architectural decisions, and nuance. There is no recovery path.
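The lossiness is structural, not incidental. A minimal sketch of the general pattern, with all names and thresholds hypothetical (this is not Claude's actual implementation): once the running token count crosses a trigger point, the full history is replaced by a fixed-size summary, and everything outside that summary is unrecoverable.

```python
# Hypothetical sketch of a compaction loop. Constants are assumptions.
CONTEXT_LIMIT = 1_000_000       # advertised window (tokens)
COMPACT_THRESHOLD = 0.9         # assumed trigger point
SUMMARY_BUDGET = 20_000         # assumed post-compaction summary size

def count_tokens(messages):
    # Crude stand-in: roughly 4 characters per token.
    return sum(len(m) for m in messages) // 4

def compact(messages):
    # A real system would ask the model for a summary; here we keep a
    # fixed-size head just to show that everything else is discarded.
    summary = " ".join(messages)[: SUMMARY_BUDGET * 4]
    return ["[summary] " + summary]

def append_turn(history, turn):
    history.append(turn)
    if count_tokens(history) > CONTEXT_LIMIT * COMPACT_THRESHOLD:
        history = compact(history)  # file contents, decisions, nuance: gone
    return history
```

The key property is one-way: `compact` maps a full history to a ~20K-token summary, and no later call can reconstruct what was dropped.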
Post-Compaction: ~20K Summary
After compaction, the model retains a lossy summary. Users report it behaves "like a different, dumber model" — because it functionally is one.
Cross-Session Memory: Zero
Every new session starts cold. "You re-explain your tech stack. You re-describe your file structure. You re-state your preferences. Every. Single. Session."
The Compaction Death Spiral
Community Voices
"The more you let Claude compact — the more it degrades." · u/AuthenticIndependent · r/ClaudeCode
"Anyone else using Claude Code and realizing the real problem isn't the code, it's the lost context?" · u/Driver_Octa · r/ClaudeCode
"Rules like DESCRIBE tables are silently dropped from memory as context grows." · GitHub #32659
"We were happy with less context and quality work. The 1M context upgrade is destroying the very reason we chose Claude over GPT." · GitHub #39715
"You re-explain your tech stack. You re-describe your file structure. You re-state your preferences. Every. Single. Session." · DEV Community · Feb 2026
"You're 45 minutes into a Claude session. It was brilliant at the start — now it's contradicting itself and forgetting your instructions." · Stefan Wirth · YouTube · "Context Rot"
SubQ Code Answer
SubQ Code
12,000,000 Tokens
12x larger context. Entire codebases fit. Compaction triggers far less often — most sessions never hit the wall. Perfect recall at hour 2, hour 5, hour 10. The context window that outlasts the workday.