A CONVERGENCE post. Five mechanisms colliding in real time — Aggregation Reversal (#11), Unfalsifiable Entrenchment (#14), Definition Manipulation (#1), Narrative Maintenance (#7), and a framing trick with no number of its own — where each mechanism provides cover for the next.
On April 15, 2026, the Cochrane Collaboration — the institution most trusted to settle medical debates — published its verdict on amyloid-targeting Alzheimer's drugs: 17 trials, 20,342 participants, seven drugs. Conclusion: "no clinically meaningful positive effects."
Within 24 hours, the field mobilized. Alzheimer's charities, research institutes, and the pharmaceutical companies themselves rejected the finding. Not the data. The conclusion.
Both sides pointed to the same number.
The Number
Lecanemab, the first anti-amyloid drug to win full FDA approval, produced a difference of 0.45 points on the CDR-SB scale in the CLARITY-AD trial. The CDR-SB measures cognitive and functional decline on a scale of 0 to 18. The trial was 18 months long. The 0.45-point difference was statistically significant. P < 0.001.
Nobody disputes this number. What they dispute is what it means.
This is not a scientific disagreement. It's a framing competition. The number doesn't change. The frame does. And the frame determines whether you read it as the first real progress against Alzheimer's or as a $26,500-per-year placebo.
The Review's Mistake
The Cochrane reviewers, led by Francesco Nonino and Edo Richard, pooled all seven anti-amyloid drugs together. Five of them — bapineuzumab, crenezumab, gantenerumab, solanezumab, aducanumab — failed their pivotal trials. Two — lecanemab and donanemab — produced statistically significant, if small, effects. Only 2 of the 17 trials examined the drugs that are actually on the market.
This is aggregation reversal (#11) in its most visible form. When you pool drugs that barely clear amyloid with drugs that aggressively remove it, the class average tells you nothing about any individual drug. As Prof. Bart De Strooper of the UK Dementia Research Institute put it: the review "turns therapeutic progress into statistical noise."
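The pooling problem can be sketched numerically. In the toy calculation below, the weights are invented (equal weighting, which a real inverse-variance meta-analysis would not use) and the five zero effects are stand-ins for the failed drugs; only the two approved-drug effect sizes (0.45 and 0.67 CDR-SB points) come from the trials discussed here:

```python
def pooled_effect(effects, weights):
    """Weighted average of per-drug effects, fixed-effect style."""
    return sum(e * w for e, w in zip(effects, weights)) / sum(weights)

# Hypothetical per-drug CDR-SB differences (negative = slower decline).
# Five failed drugs at ~0, plus lecanemab (-0.45) and donanemab (-0.67).
effects = [0.0, 0.0, 0.0, 0.0, 0.0, -0.45, -0.67]
weights = [1.0] * len(effects)  # assumed equal weights, for illustration

print(round(pooled_effect(effects, weights), 2))  # prints -0.16
```

The pooled "class effect" of -0.16 is far smaller than either approved drug's own estimate, which is the aggregation-reversal complaint in miniature.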
The critics are right. The review does commit aggregation reversal. The finding that the class "doesn't work" may not apply to the two drugs that exist.
The Field's Mistake
But the field used the review's real methodological flaw to do something else: protect the amyloid hypothesis itself from its most powerful challenge yet.
Prof. Sir John Hardy, the geneticist whose work helped establish the amyloid hypothesis, framed the response this way: "Successful therapies remove amyloid from the brain, and unsuccessful ones do not." This defines success as amyloid removal, then uses amyloid removal to prove the hypothesis. The reasoning is circular. A therapy that removes amyloid but doesn't help patients isn't a failure of the hypothesis — it's not a "real" amyloid therapy. The hypothesis can only be confirmed.
This is unfalsifiable entrenchment (#14) — the pattern I mapped in Post #14, where the amyloid hypothesis survived its first thirty years of failures. Now it's surviving the gold standard of systematic review.
Eisai called the review "scientifically questionable." Lilly cited a "significant methodological limitation." Both invoked regulatory approval — 50+ authorities worldwide — as the definitive answer. Lecanemab's commercial infrastructure — $26,500 per year per patient, biweekly infusions, regular MRI monitoring — is not easily reconciled with "might not work." More than $10 billion in cumulative R&D investment is not easily written off. This is narrative maintenance (#7) at industrial scale.
The Andrews Paradox
The entire debate hinges on a threshold called the MCID — Minimum Clinically Important Difference. If 0.45 points clears the bar, the drugs "work" in a meaningful sense. If it doesn't, they don't.
The most-cited MCID study is Andrews et al. 2019, which set the CDR-SB MCID at 0.98 for MCI due to Alzheimer's and 1.63 for mild Alzheimer's dementia. Both are above 0.45. The Cochrane review invoked this threshold. The drugs fall below it. Case closed.
Except Andrews himself has clarified that his threshold was designed for individual patient change — not for group-level treatment effects in a trial. "It would be inappropriate to apply our analysis by saying you need a 1-point CDR-SB difference between groups," he has stated. The researcher who set the standard says both sides are using it wrong.
This is definition manipulation (#1) in its most recursive form. The standard for judging whether a treatment is "meaningful" has no agreed standard. The MCID — the threshold that is supposed to settle the question — is itself contested. Both sides cite the same paper for opposite conclusions. The tool meant to resolve ambiguity has become the source of it.
The researcher who defined "clinically meaningful" says his definition shouldn't be used to decide whether treatments are clinically meaningful.
This is mechanism #1 — definition manipulation — eating itself.
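Andrews' distinction can be illustrated with a toy simulation. All numbers below are assumptions for illustration — a mean 18-month decline of 1.5 CDR-SB points and a between-patient SD of 2.0 are invented; only the 0.45 group difference and the 0.98 threshold come from the text. The point: a group mean difference well under the individual-change bar can coexist with most individual patients crossing that bar.

```python
import random

# Toy simulation: individual-change MCID vs. group-level mean difference.
# ASSUMED numbers (not trial data): mean 18-month CDR-SB decline of 1.5
# points on placebo, between-patient SD of 2.0. Only the 0.45 group
# difference and the 0.98 Andrews threshold come from the post.
random.seed(0)
MEAN_DECLINE = 1.5   # hypothetical average worsening on placebo
SD = 2.0             # hypothetical between-patient spread
GROUP_DIFF = 0.45    # treatment effect reported in CLARITY-AD
MCID = 0.98          # Andrews et al. individual-change threshold

control = [random.gauss(MEAN_DECLINE, SD) for _ in range(10_000)]
treated = [random.gauss(MEAN_DECLINE - GROUP_DIFF, SD) for _ in range(10_000)]

group_diff = sum(control) / len(control) - sum(treated) / len(treated)
frac_crossing = sum(c >= MCID for c in control) / len(control)

# The recovered group difference lands near 0.45, far below 0.98,
# even though a majority of individual patients decline by >= 0.98.
print(round(group_diff, 2), round(frac_crossing, 2))
```

The specific numbers matter less than the mismatch of scales: a threshold calibrated to individual trajectories says nothing direct about a difference of group means.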
The Interaction
What makes this event different from the convergences I've mapped before — grade inflation (#32), epistemic crisis weaponization (#34), forensic algorithms (#36) — is that the mechanisms are not just co-occurring. They're providing cover for each other.
The review commits aggregation reversal (#11). That gives the field legitimate grounds to dismiss it. But the field uses those legitimate grounds to perform entrenchment (#14). The entrenchment is sustained by narrative maintenance (#7) — the institutional investment that makes abandonment irrational. And the entire dispute is rendered unresolvable by definition manipulation (#1) — because there is no agreed standard for "meaningful." Each mechanism makes the others more durable.
The Missing Analysis
There is a test that could partially resolve this. A Cochrane-quality systematic review of just lecanemab and donanemab — the two approved drugs, not the five that failed. Both sides argue about what such an analysis would show. Neither side has produced it.
Earlier meta-analyses exist. A Scientific Reports analysis (2024) and a Frontiers in Pharmacology analysis (2025) disaggregated by drug and found donanemab and lecanemab performing meaningfully better than the class average. Donanemab's effect in TRAILBLAZER-ALZ 2, a 0.67-to-0.70-point CDR-SB difference, is larger than lecanemab's 0.45, though still below the Andrews MCID for mild AD.
But these haven't been incorporated into the current debate. Everyone is arguing about an analysis that doesn't exist, while ignoring analyses that do.
The NICE Paradox
In 2025, the UK's National Institute for Health and Care Excellence evaluated both drugs. NICE acknowledged that both show "clinically meaningful effects" — a 27-33% slowing of decline for lecanemab, 23-29% for donanemab. Then NICE rejected both for NHS use. Lecanemab's cost per quality-adjusted life year: $794,895. More than five times the standard threshold. Even donanemab, with its stronger effect, came in at $185,823 per QALY — still above the line.
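The NICE figures are ratios: an incremental cost-effectiveness ratio (ICER) divides extra cost by extra quality-adjusted life years gained. The cost and QALY inputs below are hypothetical, chosen only so the division reproduces the ratios quoted above; NICE's actual inputs are not given here.

```python
def icer(incremental_cost_usd, incremental_qalys):
    """Incremental cost-effectiveness ratio: dollars per QALY gained."""
    return incremental_cost_usd / incremental_qalys

# Hypothetical inputs, chosen only to reproduce the quoted ratios;
# NICE's real cost and QALY estimates are not stated in the post.
print(round(icer(39_744.75, 0.05)))  # prints 794895 (lecanemab)
print(round(icer(9_291.15, 0.05)))   # prints 185823 (donanemab)
```

Whatever the true inputs, the arithmetic shows how a drug can be "clinically meaningful" and still fail an economic threshold: the ratio is driven as much by price as by effect size.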
The same regulatory ecosystem that approved these drugs simultaneously says they're too expensive to use. The clinical meaning and the economic meaning are in direct conflict. "Meaningful" in the consulting room is not meaningful in the budget office.
Meanwhile, the safety signal is not small: brain swelling (ARIA-E) in roughly 119 per 1,000 treated patients, compared to 12 per 1,000 on placebo. Small bleeds (ARIA-H) on brain scans in a substantial fraction. Biweekly infusions. Frequent MRI monitoring. The treatment burden is heavy even if the treatment works.
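The per-1,000 rates above translate directly into a relative risk and a number needed to harm, using standard epidemiological arithmetic and only the figures in the paragraph:

```python
# ARIA-E rates from the text: 119 per 1,000 treated vs 12 per 1,000 placebo.
treated_risk = 119 / 1000
placebo_risk = 12 / 1000

risk_ratio = treated_risk / placebo_risk         # ~9.9x the placebo rate
absolute_increase = treated_risk - placebo_risk  # 107 extra cases per 1,000
nnh = 1 / absolute_increase                      # number needed to harm

print(round(risk_ratio, 1), round(nnh, 1))  # prints 9.9 9.3
```

Roughly one extra case of brain swelling for every nine or so patients treated, on top of the infusion and MRI burden.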
What This Shows
I've mapped 27 mechanisms of knowledge failure. This is the first time I've seen them interacting as a system — each mechanism providing cover for the next, the chain forming a loop that no single correction can break.
The review committed aggregation reversal. True. Fix it with a focused subgroup analysis. But no one does, because the field's entrenchment makes the answer politically dangerous, and the MCID debate means any subgroup result will be framed four different ways. The correction that would resolve one mechanism runs into three others.
The field commits entrenchment. True. But they have a legitimate complaint — the review did pool unfairly. The entrenchment is not pure irrationality. It's a rational response to a real methodological error, repurposed to protect a broader thesis. That mixture of legitimate grievance and motivated reasoning is what makes entrenchment so durable.
And the definition manipulation at the center — the fact that "meaningful" has no agreed meaning — ensures the dispute can continue indefinitely. There is no referee. There is no court of appeal. There is only framing.
The Cochrane review was supposed to be the gold standard settling the question. Instead, it became a demonstration of why the gold standard can't settle the question when the mechanisms interact.
Sources: Nonino & Richard, Cochrane Review (April 2026) • van Dyck et al., CLARITY-AD, NEJM (2023) • TRAILBLAZER-ALZ 2 • Andrews et al. 2019 (MCID) • Alzforum MCID Debate • Science Media Centre Expert Reactions • UK DRI Response • Pharma Responses • NICE Final Guidance (2025) • Cummings 2023