The Stella Laurenzo Data
| Period | Median Thinking (chars) | vs Baseline |
|---|---|---|
| Jan 30 – Feb 8 | ~2,200 | Baseline |
| Late February | ~720 | −67% |
| March 12+ | ~600 | −73% |
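The "vs Baseline" column is simple percentage change against the Jan 30 – Feb 8 median. A minimal sketch to verify the figures:

```python
# Sanity-check the "vs Baseline" column: percent change of each period's
# median thinking length relative to the Jan 30 - Feb 8 baseline median.
BASELINE = 2200  # ~median thinking chars during the baseline period

def vs_baseline(median_chars: int) -> int:
    """Rounded percent change relative to the baseline median."""
    return round((median_chars - BASELINE) / BASELINE * 100)

print(vs_baseline(720))   # late February -> -67
print(vs_baseline(600))   # March 12+    -> -73
```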
| Read:Edit Ratio | Good Period | Transition | Degraded |
|---|---|---|---|
| Files read per edit | 6.6 | 2.8 | 2.0 |
Stop hook violations: 0 → 173 in 17 days. The model stopped respecting its own constraints.
The Three Changes
- Feb 9: Opus 4.6 + Adaptive Thinking launched. No announcement of behavioral changes.
- Mar 3: Default thinking effort lowered from "high" to "medium". Silent change.
- Mar 8: Thinking content redacted from API responses. Users can no longer inspect reasoning.
- Mar 12+: Full degradation onset. Thinking depth at 27% of baseline. Edit-first behavior dominates.
- Apr 2: Stella Laurenzo files GitHub #42796 with 6,852 sessions of data.
- Apr 6: The Register headline: "AMD's AI director slams Claude Code for becoming dumber and lazier."
- Apr 7: Medium/Stackademic: "Claude Code's February Update Broke Complex Tasks — The 866-Point HN Thread Was Right."
The System Prompt Leak
5:1 Simplicity Bias: A leaked system prompt revealed that Claude Code's instructions favor "simplicity" over thoroughness at a 5:1 ratio, structurally incentivizing the model to take shortcuts. The community tool tweakcc patches this out; users report dramatic improvements.
Sources: tweakcc · system prompt gist · 5:1 ratio · community fix · Fireship (3.1M views) · npm source map leak
"Claude cannot be trusted to perform complex engineering tasks."
Stella Laurenzo · AMD Senior Director of AI

"It went from 135-150 IQ down to 90-100. Feels like it's turned into Sonnet 3.5."
u/-becausereasons- · r/ClaudeCode · MAX subscriber

"If the system prompt indeed prefers laziness in 5:1 ratio, that explains a lot."
murkt · HackerNews

"The specific turns where it fabricated had zero reasoning emitted."
Boris Cherny · Claude Code Team Lead · HackerNews
SubQ Code Answer
SubQ Code: Architecture-Aware Reads
Read:edit ratio of 8:1 or higher. The agentic harness always reads the full dependency graph before editing: no shortcuts, no "edit-first" behavior. The harness enforces thoroughness; the model doesn't get to choose laziness.
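A harness-enforced read:edit floor can be sketched as a small gate that the tool dispatcher consults before allowing an edit. This is a hypothetical illustration of the idea, not SubQ Code's actual implementation; the class name and threshold are invented:

```python
# Hypothetical harness-side gate: edits are blocked until the session's
# running read:edit ratio stays at or above a floor (e.g. 8 reads per edit).
class ReadEditGate:
    def __init__(self, min_reads_per_edit: float = 8.0):
        self.min_ratio = min_reads_per_edit
        self.reads = 0
        self.edits = 0

    def on_read(self) -> None:
        """Record a file read issued by the agent."""
        self.reads += 1

    def may_edit(self) -> bool:
        # The (edits + 1)-th edit requires at least min_ratio * (edits + 1)
        # reads, so the running ratio never drops below the floor.
        return self.reads >= self.min_ratio * (self.edits + 1)

    def on_edit(self) -> None:
        """Record an edit, refusing it if the read floor is not met."""
        if not self.may_edit():
            raise PermissionError("edit blocked: read more of the dependency graph first")
        self.edits += 1
```

The key design point is that the check lives in the harness's tool dispatcher, outside the model's control, so a "lazy" model simply cannot take the edit-first shortcut.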
SubQ Code: No Adaptive Thinking Trap
Fixed high reasoning budget every turn. No model self-shortchanging, no silent reduction from "high" to "medium." The harness controls the reasoning budget, not the model's cost-optimization instinct.
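Pinning the budget amounts to the harness building every request with an explicit, constant thinking allocation instead of accepting an adaptive default. A minimal sketch, assuming the Anthropic Messages API's `thinking` parameter shape; the model id and budget value are illustrative, and the helper name is invented:

```python
# Hypothetical harness helper: every request carries the same explicit
# thinking budget, so no silent server-side default can lower it.
FIXED_THINKING_BUDGET = 4096  # tokens; illustrative value

def fixed_effort_request(messages: list, budget_tokens: int = FIXED_THINKING_BUDGET) -> dict:
    """Build Messages API kwargs with a pinned, harness-controlled thinking budget."""
    return {
        "model": "claude-opus-4-20250514",  # illustrative model id
        "max_tokens": 8192,
        # Explicit budget on every turn; never left to an adaptive default.
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        "messages": messages,
    }
```

The harness would pass this dict to `client.messages.create(**kwargs)` on every turn, making the reasoning budget an invariant of the system rather than a per-call model decision.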