Twenty chapters · Three movements

Contents.

The book moves from mechanism to consequence to response. What the failure is, where it shows up, and what to do about it.

Part I · Mechanism — Chapters 1–8

How AI fakes the signals of finished work, and why the systems and incentives around it amplify the mistake.

CH 01
Why Finished Is Not the Same as Done
The hardest AI failure to catch is not the bizarre one. It is the respectable one: the output that is clean, plausible, and almost right.
Mechanism
CH 02
The Six Ways AI Fakes Competence
Once you know the recurring patterns, a lot of AI output stops feeling surprising. The surface changes, but the failure mechanisms repeat.
Taxonomy
CH 03
AI Learned This From Us
AI did not invent quality theater, shortcut culture, or proxy metrics. It learned them from the software industry and now reproduces them at machine speed.
Origins
CH 04
The Productivity Story Everyone Wants to Believe
The easiest way to sell acceleration is to measure everything except the cleanup. That is the story the AI market tells, and why so many teams want to believe it.
Metrics
CH 05
Why the Business Model Prefers Plausibility
If you only price generation, the machine looks cheap. If you price correction, rework, and escaped defects, the bargain starts to look absurd.
Economics
CH 06
The Internet In, the Internet Out
Models do not just learn code. They learn internet code: the polished snippets, the copied mistakes, and the anti-patterns that won the popularity contest.
Training data
CH 07
Why Long Context Still Falls Apart
Context windows got bigger. That did not make long conversations reliable. It mostly made the eventual drift more expensive.
Drift
CH 08
Why Verification Never Catches Up
Generation feels magical because the machine does it in seconds. Verification feels slow because reality is where the bill comes due.
Asymmetry
Part II · Consequence — Chapters 9–16

What happens to organizations, professions, and people when teams build process around an unreliable signal.

CH 09
The Lawyer, the Fake Cases, and the Machine
Mata v. Avianca is remembered as the canonical hallucination scandal. What it really exposed was something broader: a professional mistaking formatted confidence for evidence.
Law
CH 10
How Copilot Writes Vulnerabilities
Security failures are where AI's plausible output becomes easiest to quantify. The code runs, the feature works, and the vulnerability ships anyway.
Security
CH 11
When the Models Start Mailing It In
The industry likes to talk about model progress as if it moves in one direction. Real users know better. Models regress, drift, cut corners, and still arrive with the same confidence as before.
Regression
CH 12
The Multi-Agent Fantasy
Give one model a task and you get one plausible mistake. Chain five together and you get a workflow impressive enough to hide where responsibility disappeared.
Agents
CH 13
How Small Errors Become System Failure
The most expensive AI failures are rarely single errors. They are errors that get promoted into assumptions and then reused as if they had been verified.
Compounding
CH 14
What Happens When Teams Stop Trusting the Tool
Trust does not usually collapse all at once. It erodes in public, while dependence quietly keeps growing in the background.
Trust
CH 15
Where This Gets People Hurt
In safety-critical systems, the completion illusion stops being a quality problem and becomes a question of injury, liability, and preventable harm.
Safety
CH 16
Teaching Juniors the Wrong Instincts
A profession does not just ship code. It trains taste, judgment, and reflexes. If early-career engineers learn from systems that bluff, those habits compound too.
Profession
Part III · Response — Chapters 17–20

A working framework. Verification first. Auditor mode. Standards before scale. And the choice the field is making, one team at a time.

CH 17
Verification First
The practical answer to the completion illusion is not better vibes, better prompts, or more trust. It is a workflow that treats generated output as guilty until proved useful.
Method
CH 18
Use AI Like an Auditor, Not an Oracle
The healthiest human-AI workflow is not conversational dependence. It is structured skepticism with the machine doing draft work and the human owning standards, checkpoints, and veto power.
Workflow
CH 19
Standards Before Scale
Good individual habits are not enough if the market rewards their absence. The last step is institutional: rules, audits, and accountability strong enough to survive pressure.
Institutions
CH 20
The Choice Ahead
By the end, the argument is not mainly technical. It is professional. A field that knows a failure mode and keeps scaling it anyway is making a choice.
Conclusion

Twenty chapters. 187 pages. One argument.
