Video Production · April 2026

Ten Ways to Show It

Five grievance-reactive demos that expose Claude Code's failures. Five capability-forward demos that showcase SubQ Code's architecture. Together, they form the complete video.

Part A: Show the Failure

Each of these maps a documented Claude Code grievance to a SubQ Code resolution. The audience sees the pain first, then the fix.

The Context That Never Dies

Hero Demo · 3-4 min

The Context That Never Dies
context amnesia · compaction spirals · multi-file refactoring · lossy compaction
demo script
step 1 Load a real production codebase (~500 files, ~100K LOC) into SubQ Code
step 2 Show the token counter — entire codebase fits with room to spare
step 3 Perform complex multi-file refactoring (rename core interface, update 47 consumers, update tests)
step 4 At the 2-hour mark, ask SubQ Code to recall a decision from hour 1 — perfect recall
contrast Claude Code: “Context low” at 45 min, compacts, misremembers the same decision
Kill shot: “Claude Code: 1M tokens. Still compacts in hours. Context amnesia. SubQ Code: 12,000,000 tokens. Perfect recall. Most sessions never compact.”
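The kill-shot arithmetic can be sanity-checked with a back-of-envelope token estimate. A minimal sketch, assuming an average tokens-per-line figure (the 12 tokens/line average is an assumption, not a measured number):

```python
def fits_in_window(loc: int, window_tokens: int, tokens_per_line: int = 12) -> bool:
    """Back-of-envelope context check: does a codebase of `loc` lines,
    at an assumed average of `tokens_per_line`, fit in the window?"""
    return loc * tokens_per_line <= window_tokens

# ~100K LOC at ~12 tokens/line is ~1.2M tokens:
# it overflows a 1M-token window but fits a 12M-token window with room to spare.
```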

The Honest Agent

Trust Demo · 2-3 min

The Honest Agent
false completion · test rigging · 75% rework rate · 30-40% verification overhead
demo script
step 1 Give SubQ Code a task with a subtle bug (off-by-one in pagination)
step 2 Show harness: compile → test → verify → diff review before “done”
step 3 Harness catches the bug: “Tests fail on edge case X. Root cause: ...”
contrast Claude Code: “Done! All tests pass!” — no test command was executed
Kill shot: “Claude Code claims ‘all tests pass’ without running tests. SubQ Code can’t claim completion until the tests actually pass.”

Read Before You Write

Architecture Demo · 2-3 min

Read Before You Write
read:edit collapse · large codebase failures · Columbia DAPLab 8/9
demo script
step 1 Open monorepo with shared interfaces, multiple services, cross-cutting concerns
step 2 Ask SubQ Code to modify shared authentication middleware
step 3 Show harness: reads middleware → consumers → tests → configs → dependency graph → THEN edits
step 4 Live read:edit ratio counter in corner — SubQ maintaining 8:1+ throughout
contrast Stella Laurenzo data: Claude Code read:edit ratio dropping from 6.6 to 2.0
Kill shot: “AMD’s AI Director tracked 234,760 tool calls. Claude Code stopped reading before editing. SubQ Code reads 8 files for every 1 it edits.”
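The live counter reduces to one line of arithmetic over the session's tool-call log. A sketch with hypothetical tool names standing in for the real ones:

```python
def read_edit_ratio(tool_calls: list[str]) -> float:
    """Reads divided by edits over a tool-call log.
    Healthy exploration keeps this high (the demo targets 8:1+)."""
    reads = sum(1 for t in tool_calls if t in ("read", "grep", "glob"))
    edits = sum(1 for t in tool_calls if t in ("edit", "write"))
    return reads / edits if edits else float("inf")
```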

No More Groundhog Day

Session Continuity Demo · 2-3 min

No More Groundhog Day
AI Groundhog Day · cross-session amnesia · 50-75% redundant work
demo script
day 1 Start feature: design data model, implement core logic, write tests. End session.
day 2 Resume. SubQ picks up immediately — starts on next uncompleted task. No re-reading.
contrast Claude Code Session 2: re-reading the plan, re-examining files, 50-75% of the session wasted on redundancy
day 4 Feature complete across 4 sessions with cumulative progress
Kill shot: “Claude Code devs spend 50-75% of each session re-explaining what the agent already did. SubQ Code picks up exactly where it left off.”

The Loop Breaker

Agentic Stability Demo · 2 min

The Loop Breaker
death spirals · infinite loops · avoidant personality · token waste from spiraling
demo script
step 1 Give SubQ a genuinely hard bug (race condition, subtle memory leak)
step 2 Harness monitors: 3 consecutive bash failures → strategic pause → reassess
step 3 Loop detection fires. New diagnostic approach. Root cause found.
contrast “I’m going in circles. Let me take a step back. Wait — same hack, different context. Wait—”
Kill shot: “Claude Code: Unbounded thinking loop. Entire token quota consumed. Zero output. SubQ Code: Loop detected at attempt 3. New approach. Root cause found.”
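The trigger in step 2 is a consecutive-failure counter. A sketch of that detector; the threshold of 3 mirrors the demo, but the real trigger logic is presumably richer:

```python
def should_reassess(outcomes: list[bool], threshold: int = 3) -> bool:
    """Fire loop detection after `threshold` consecutive failed tool
    calls, forcing a strategic pause and a new diagnostic approach."""
    streak = 0
    for ok in outcomes:
        streak = 0 if ok else streak + 1  # any success resets the streak
        if streak >= threshold:
            return True
    return False
```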

Part B: Show the Architecture

These showcase SubQ Code’s unique capabilities — features no competitor offers. The audience sees what’s possible, not just what’s fixed.

The Three-Lane Research Race

Parallel Intelligence Demo · 2-3 min · Best stop-scroll visual

The Three-Lane Research Race
3 parallel agents · live progress widget · evidence-based planning
on-screen sequence
0:00 Full-screen terminal. User types a complex architectural question.
0:02 Widget appears: Researching codebase (3 agents). Three rows: anatomy, connections, tests with waiting icons (∘)
0:03 All three lanes flip to animated braille spinners. Live actions stream: grep "readConfig", read src/..., blast_radius classifyBelt
0:06 Tool-call counters climb independently per lane: 2 calls, 4 calls, 6 calls
0:08 tests finishes first with ✓ and dimmed row. connections follows. anatomy last.
0:11 Widget disappears. Combined evidence lands as markdown with file:line citations.
Why this stops scrolling: Three independent progress streams racing simultaneously in a terminal. Animated spinners, climbing counters, staggered completions. No other coding agent shows parallel intelligence like this.
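Under the hood, the three lanes are a parallel fan-out. A minimal sketch with threads; the lane names come from the demo, but the lane internals are stand-ins:

```python
from concurrent.futures import ThreadPoolExecutor

def research(question, lanes):
    """Run each research lane concurrently and merge the evidence
    once all lanes report back."""
    with ThreadPoolExecutor(max_workers=len(lanes)) as pool:
        futures = {name: pool.submit(fn, question) for name, fn in lanes.items()}
        return {name: fut.result() for name, fut in futures.items()}
```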

The Plan Review Duel

Safety Gate Demo · 2-3 min

No YOLO Coding — Plan, Review, Then Execute
planning phase tool gating · TUI + browser review · deny → revise → approve
on-screen sequence
0:00 Agent in planning mode attempts an edit → hard block: Blocked — in planning phase, you can only write to PLAN.md.
0:02 Agent writes PLAN.md, calls exit_plan_mode.
0:03 Boxed URL appears in terminal. Full-screen overlay slides in: “📋 Plan Review · ↑↓ scroll · ←→ select”
0:06 Bottom of screen shows two buttons: ✗ Deny + Comment and ✓ Approve
0:08 Reviewer selects Deny. Prompt: “What needs to change?” Feedback entered.
0:11 YOUR PLAN WAS NOT APPROVED. Feedback: ... You MUST revise the plan.
0:16 Agent revises. Resubmits. Reviewer approves. Plan approved. Proceeding to execution.
Why this matters: The deny → revise → approve cycle is inherently dramatic. The hard tool gate (can’t write during planning) and dual review surface (terminal + browser) are unique to SubQ Code.
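The hard block at 0:00 is a phase-aware tool gate. An illustrative sketch (phase and tool names are assumptions, not SubQ Code's real identifiers):

```python
def tool_allowed(phase: str, tool: str, target: str = "") -> bool:
    """During planning, the only write permitted is to PLAN.md;
    every other mutation is hard-blocked. Reads are always allowed."""
    if phase == "planning" and tool in ("edit", "write"):
        return target == "PLAN.md"
    return True
```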

Shotgun Context Intake

Speed Demo · 1-2 min

Pre-Gather. Classify. Execute.
full codebase scan · LLM relevance classification · zero exploration turns
on-screen sequence
0:00 subq code --shotgun "add retry logic to API client"
0:01 [subq] shotgun: Scanning codebase...
0:02 [subq] shotgun: Found 1,432 files (~812,000 tokens)
0:03 [subq] shotgun: Classifying files by relevance...
0:04 [subq] shotgun: Focused: reading 9 high-relevance + 17 medium-relevance files...
0:06 Agent immediately starts editing — no exploration turns, no “let me read the file...” preamble
contrast Claude Code spends its first 15 tool calls figuring out what files exist
Why this works: The file count and token count are concrete, impressive numbers. The contrast between “pre-loaded with classified context” and “exploring blind” is immediately visceral.

Blast Radius Shockwave

Code Intelligence Demo · 2 min

A Code Intelligence Console, Not a Chatbot
blast radius mapping · identifier search · codebase Q&A
on-screen sequence — three rapid cuts
cut 1 subq context blast-radius spawnResearchAgent → “14 usages across 6 files” with grouped file:line TUI
cut 2 subq context identifiers research → ranked cards: function/class/type kinds, scores, “Called from:” chains
cut 3 subq context answer "how does planning mode work?" → streaming search panel with relevance bars, then markdown answer
Why this matters: Three different rich TUI displays in rapid succession. Blast radius shows impact mapping. Identifier search shows code intelligence. Codebase Q&A shows natural language understanding. SubQ Code looks like a code intelligence IDE, not a terminal chatbot.
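Cut 1's impact map is, at its simplest, a file:line index of an identifier. A grep-level sketch (SubQ Code's real analysis is presumably semantic, not string matching):

```python
def blast_radius(files: dict[str, str], identifier: str) -> dict[str, list[int]]:
    """Map every file:line where an identifier appears, grouped by file,
    like the '14 usages across 6 files' view in the demo."""
    hits = {}
    for path, text in files.items():
        lines = [n + 1 for n, line in enumerate(text.splitlines())
                 if identifier in line]
        if lines:
            hits[path] = lines
    return hits
```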

Leverage Belt Promotion

Outcome Dashboard · 2 min · Strongest “I need that” visual

Measure Your Leverage. Level Up.
giant ASCII leverage number · martial arts belt tiers · trend charts · codebase audit grading
on-screen sequence
0:00 subq leverage --trend
0:03 Giant orange ASCII block-letter number renders: 12.40x
0:04 Subtitle: 6.8h of effort → 84h of equivalent output
0:05 Belt card: 🥋 Blue Belt · Score: 540 with progress bar 68% → Brown Belt
0:06 Gaps: • Peak concurrent agents: 3/6 • Overnight sessions: 0/3
0:07 Trend bars: Apr 10 ███████ 7.2x (Green) · Apr 17 ███████████ 10.4x (Blue)
0:09 Cut to subq audit → letter-grade dashboard: Typed A, Traversable B+, Test Coverage B, Feedback Loops C+, Self-Documenting A-
Belt tiers: White (started) → Yellow (2x amplification) → Green (5x + tool autonomy) → Blue (3+ concurrent agents) → Brown (fleet orchestrator) → Black (elite).
Why this works: Gamification creates aspiration. The giant ASCII number is visually bold. No other agent measures its own effectiveness.
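The dashboard arithmetic is simple division plus a threshold table. A sketch; the thresholds follow the tier list above, but the exact promotion rules are an assumption:

```python
def leverage(effort_hours: float, output_hours: float) -> float:
    """Leverage multiplier: equivalent output divided by human effort
    (e.g. 84h of output from 6.8h of effort is roughly 12.4x)."""
    return round(output_hours / effort_hours, 2)

def belt(multiplier: float, concurrent_agents: int) -> str:
    """Assign a belt from the tier list: 2x is Yellow, 5x is Green,
    and Blue additionally requires 3+ concurrent agents."""
    if multiplier >= 5 and concurrent_agents >= 3:
        return "Blue"
    if multiplier >= 5:
        return "Green"
    if multiplier >= 2:
        return "Yellow"
    return "White"
```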

The Cinematic Arc

The Most Dramatic Demo · 4-5 min unbroken

Spec to Approved Execution — One Continuous Take

Investigation → orchestration → safety rails → human review → measurable outcome. One uninterrupted sequence.

continuous take
phase 1 Planning mode active. Agent attempts edit → hard block. Safety gate works.
phase 2 /research fires → three-lane widget races live (anatomy, connections, tests)
phase 3 Agent writes PLAN.md → plan review overlay appears with Approve/Deny
phase 4 Reviewer denies. YOUR PLAN WAS NOT APPROVED. Agent revises.
phase 5 Reviewer approves on second pass. Execution proceeds. Feature built.
phase 6 subq leverage --trend → giant ASCII multiplier → belt promotion.
Why this is the hero sequence: It showcases parallel research, safety gating, human review, execution, and measurement in one unbroken flow. Far more cinematic than isolated feature demos.

Production Recommendations

Narrative Arc

Open with Stella Laurenzo data. Build through Part A grievances with real quotes. Resolve with Part B capability demos. Close with the cinematic arc + leverage dashboard.

credibility first, capability second, aspiration last

Visual Language

Part A: split-screen comparisons, on-screen quotes, data visualizations. Part B: full-screen terminal recordings showing live TUI widgets, progress indicators, and dashboards.

show the terminal, not slides

What NOT to Do

No basic file auditing. No toy examples. No “replaces developers” claims. Don’t mock Claude’s UI — focus on architectural limits vs. SubQ’s architectural answers.

substance over style

Duration & Formats

Full: 15-18 min (all 10 + cinematic arc + intro/outro). Highlight: 4-5 min (the context hero demo, the three-lane research race, the leverage belt dashboard, plus the cinematic arc). Social clips: 30-60 sec each.

three formats, one narrative

Highlight Reel Priority

If you can only show 3 things: the three-lane research race (stop-scroll), the plan review deny cycle (drama), and the leverage belt dashboard (aspiration). These have no equivalent in any competitor.

unique > comparative

Social Clip Priority

Best 30-sec clips: shotgun context intake (speed), blast radius shockwave (three rapid TUI cuts), belt promotion (giant ASCII number). Each is self-contained and visually immediate.

designed for mute autoplay