Screens Explain 0.4% of Teen Wellbeing. We're Legislating Like It's 100%.

Mechanism #13: Plausibility Capture — when an explanation is so intuitively satisfying that the evidence threshold for action drops to near zero.

This week, the CEO of Pinterest called for banning social media for everyone under 16. The World Happiness Report called major platforms "dangerous consumer products that harm adolescents at a massive scale." A jury in Los Angeles is deliberating whether Google and Meta are liable for a youth mental health crisis. Australia has deactivated 4.7 million teen accounts. Thirty-five US states have enacted phone or social media restrictions. The Surgeon General wants tobacco-style warning labels.

The confidence is total. The evidence is not.

The Number No One Wants to Talk About

In 2019, Amy Orben and Andrew Przybylski published a specification curve analysis in Nature Human Behaviour that mapped more than 600 million defensible analytical paths through three large datasets covering 355,358 adolescents, then ran a representative sample of those specifications. Their question: how much of the variation in adolescent wellbeing does digital technology use actually explain?

The answer: less than 0.4%.
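The specification-curve logic is worth seeing concretely: instead of reporting one analysis, you run every defensible combination of analytic choices and report the whole distribution of effects. A minimal Python sketch with toy data (the choices and numbers here are illustrative, not the study's):

```python
import random
from itertools import product

random.seed(0)
n = 500
screen = [random.gauss(0, 1) for _ in range(n)]
# Wellbeing barely depends on screen time (true r ~ 0.06, i.e. R^2 ~ 0.4%)
wellbeing = [0.06 * s + random.gauss(0, 1) for s in screen]

def corr(x, y):
    """Pearson correlation, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Two toy analytic choices: drop outliers or not, subsample or not.
# A real specification curve crosses hundreds of such choices.
specs = []
for drop_outliers, first_half in product([False, True], repeat=2):
    xs, ys = list(screen), list(wellbeing)
    if first_half:
        xs, ys = xs[: n // 2], ys[: n // 2]
    if drop_outliers:
        keep = [i for i, v in enumerate(xs) if abs(v) < 2]
        xs, ys = [xs[i] for i in keep], [ys[i] for i in keep]
    specs.append(corr(xs, ys))

print(sorted(round(s, 3) for s in specs))  # every specification stays tiny
```

The point of the method is that no single analytic path can be cherry-picked: the conclusion has to hold across the whole curve.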

What explains teen wellbeing? A scale of measured effects:

Being bullied: ~10%
Sleep quality: ~7.5%
Family support: ~6%
Wearing glasses: ~0.5%
Screen time: <0.4%
Eating potatoes: ~0.35%

Variance explained in adolescent wellbeing. Source: Orben & Przybylski 2019, Nature Human Behaviour (n=355,358).

Screen time's measured effect on teen wellbeing sits between wearing corrective lenses and eating potatoes. The Surgeon General called for tobacco-style warning labels on something that, by the best available measurement, explains less of teen wellbeing than whether they need glasses.
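To make the headline number concrete: variance explained is R squared, so 0.4% of variance corresponds to a Pearson correlation of roughly 0.06, below even Cohen's conventional "small" benchmark of r = 0.1. A quick check:

```python
import math

# Variance explained reported by Orben & Przybylski (2019): < 0.4%
r_squared = 0.004

# The equivalent Pearson correlation is the square root of R^2
r = math.sqrt(r_squared)
print(f"r = {r:.3f}")  # r = 0.063

# Cohen's conventional benchmark for a "small" correlation is r = 0.1;
# screen time sits well below it.
```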

That finding was published seven years ago. It has not been overturned. The policy cascade started anyway.

The Experimental Evidence Says Nothing Is There

Observational data can mislead — maybe kids who are already struggling use more social media. The gold standard is experiments: take teens off social media and see what happens.

Two preregistered meta-analyses have now pooled those experiments. Both found the same thing.

Christopher Ferguson's 2024 meta-analysis — the first preregistered synthesis of social media experiments on mental health — found no average effect of social media interventions on mental health outcomes.

Lemahieu et al. (2025), published in Scientific Reports, pooled 10 studies covering 4,674 participants and 38 effect sizes. Social media abstinence produced:

Positive affect: g = 0.03 (95% CI −0.11 to 0.16)
Negative affect: non-significant
Life satisfaction: g = 0.03 (95% CI −0.17 to 0.22)

Effect sizes of 0.03 are, in practical terms, zero. The confidence intervals cross zero in every case. Duration of abstinence made no difference — quitting for a week and quitting for a month produced the same null result.
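A reported confidence interval that crosses zero can be translated back into a test statistic. A small sketch using the positive-affect numbers (assuming a symmetric normal-approximation CI, which is standard for meta-analytic g):

```python
# Lemahieu et al. (2025), positive affect: g = 0.03, 95% CI -0.11 to 0.16
g, lo, hi = 0.03, -0.11, 0.16

# For a symmetric 95% CI, the half-width is 1.96 standard errors
se = (hi - lo) / (2 * 1.96)
z = g / se
print(f"SE = {se:.3f}, z = {z:.2f}")  # SE = 0.069, z = 0.44

# z needs to reach ~1.96 for p < .05; 0.44 is nowhere close.
# That is what "the CI crosses zero" means in practice.
```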

When you take teens off social media and measure what happens with preregistered protocols, nothing measurable happens.

The Longitudinal Data Agrees

The University of Manchester's 2025 longitudinal study followed 25,629 adolescents across three annual waves. Their conclusion: "No evidence that time spent on social media or gaming frequency predicted later internalizing symptoms among girls or boys."

Przybylski and Vuorre (2023) estimated the association between Facebook adoption and wellbeing across 72 countries using 946,798 individuals. They found "no evidence" of a consistent negative link. In many countries, the association was positive.

A January 2026 mediation study in Humanities and Social Sciences Communications, a Nature Portfolio journal (n=50,231), found that screen time associates with mental health problems through sleep disruption and reduced physical activity — not directly. It's what screens displace, not screens themselves.

This matters because every policy proposal targets screen access. None target sleep or physical activity.
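The displacement claim has a simple formal shape. In a mediation model, the total effect of screens decomposes into a direct path plus indirect paths through each mediator; the finding is that the direct path is what vanishes. A sketch with hypothetical path coefficients (chosen for illustration, not the published estimates):

```python
# Hypothetical path coefficients, for illustration only.
# In a simple mediation model: total = direct + sum(a_i * b_i) over mediators.
paths = {
    "sleep":    {"a": 0.30, "b": 0.40},  # screens -> sleep loss -> symptoms
    "activity": {"a": 0.25, "b": 0.20},  # screens -> less activity -> symptoms
}
direct = 0.0  # no meaningful direct screens -> symptoms path

indirect = sum(p["a"] * p["b"] for p in paths.values())
total = direct + indirect
print(f"indirect = {indirect:.2f}, direct = {direct:.2f}, total = {total:.2f}")
# The entire total effect runs through the mediators, which is why
# targeting sleep and activity is the policy implication, not screen access.
```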

So Why Does Everyone Believe It?

Because the story is irresistible. Teen mental health IS declining. Smartphones DID arrive roughly when it started. The mechanism — dopamine hits, comparison spirals, attention fragmentation — feels right. And children are suffering, which makes demanding rigorous evidence feel like callous delay.

This is plausibility capture: when an explanation is so intuitively satisfying and morally urgent that the burden of proof inverts. Instead of "prove it causes harm before restricting," it becomes "prove it's safe before allowing." The precautionary principle replaces the evidence standard.

Jonathan Haidt's The Anxious Generation (2024) is the canonical text of this movement. It proposes four norms: no smartphones before 14, no social media before 16, phone-free schools, more unsupervised play. It has driven legislation across dozens of jurisdictions.

It has also been subjected to devastating scrutiny.

"The book's repeated suggestion that digital technologies are rewiring our children's brains and causing an epidemic of mental illness is not supported by science."

— Candice Odgers, Nature, March 2024. Odgers has used the book as a teaching example of how correlation-based trend lines lead people to construct causal stories.

Aaron Brown audited Haidt's citations. Of the 476 studies cited in the book, two-thirds predate 2010 — before the period Haidt is writing about. Only 22 have data on either heavy social media use or serious mental health issues among adolescents. None have data on both.

When Haidt and Rausch attempted to rebut Ferguson's null meta-analysis with their own post-hoc subgroup analyses, independent statisticians concluded the reanalysis lacked "adequate statistical rigor to build a 'case for causality.'"

And then there is Haidt's own admission:

"I am promoting a social change program… and I am doing this before the scientific community has reached full agreement."

— Jonathan Haidt

This is the most honest thing anyone in this debate has said. It is also the definition of acting on plausibility rather than evidence.

Australia: The Real-Time Experiment

On December 10, 2025, Australia banned under-16s from social media. Platforms face fines up to A$49.5 million. In the first month, 4.7 million teen accounts were deactivated. Snap alone locked 415,000+ accounts.

Three months in, here is what we're watching:

Australia's Under-16 Social Media Ban — Status at 3 Months
Enforcement: Wildly inconsistent. Some teens lost access immediately; others never noticed. Workarounds surfaced "almost immediately."
Age verification: The government's own trial found age-estimation technology off by 2–3 years for younger users.
Collateral damage: Swept up adults and at-risk teens who depended on online communities for support.
Legal challenge: High Court challenge filed by Digital Freedom Project (Nov 2025). Reddit pursuing a separate challenge arguing accounts are protective.
Public opinion: 77% favor the ban. Only 25% believe it will work. 67% believe it won't.

That last number is the tell. A population that overwhelmingly supports a policy it overwhelmingly believes will fail. This is what plausibility capture looks like at the level of democratic governance: the story is so compelling that even skepticism about its practical implementation can't erode support for the principle.

Meanwhile, the UK rejected a blanket ban for under-16s in March 2026, opting instead for stricter age verification. India's Karnataka and Andhra Pradesh are moving to ban. France, Germany, Italy, Greece, Spain, and Malaysia are considering similar legislation.

The cascade is accelerating — even as the first country to try it watches the enforcement architecture collapse in real time.

What the Evidence Actually Supports

Here is what we know, stated honestly:

Teen mental health is declining. That's real. Anxiety and depression among adolescents have risen substantially over the past decade, particularly in English-speaking countries. The World Happiness Report 2026 documents this across multiple metrics. This is not in dispute.

The effect size for screen time is tiny. The best measurement across 355,358 adolescents places it below 0.4% of variance. This is not zero, but it is not an epidemic driver.

Experimental evidence shows no consistent causal effect. Two preregistered meta-analyses. Null results. Abstinence duration doesn't matter.

The causal arrow may point backwards. Longitudinal evidence suggests depression precedes increased social media use, not the other way around. Teens who are struggling go online more — screens are a symptom, not a cause.

It's displacement, not screens. The mediation evidence says the problem is what screen time replaces — sleep and physical activity — not screens per se. Targeting screen access rather than sleep hygiene and activity is like banning couches to prevent sedentary lifestyles.

The real drivers are unknown. Candidates include: economic inequality, adverse childhood experiences, climate anxiety, pandemic disruption, academic pressure, erosion of community institutions, and possibly specific aspects of social media content (not time). We don't know. That uncertainty is the most important finding, and the one most aggressively suppressed by the current narrative.

As Sander van der Linden wrote in Nature Health this month: "A ban does not empower youth. It confers no skills. It doesn't cultivate resilience against online harms. It merely postpones the problem."

And as Orben and Matias argued in Science: the research infrastructure itself is broken. Studies can't keep pace with platform deployment. Companies restrict data access. We need incident registries and parallel evidence gathering — not confident legislation built on uncertain science.

Mechanism #13

Plausibility capture is not the same as being wrong. It's possible that specific aspects of social media — algorithmic content, comparison dynamics, cyberbullying — contribute meaningfully to teen distress in ways current measurement hasn't isolated. The Science editor called the evidence "muddled" and "unsettled." That's honest.

But "unsettled" is not the same as "established." And the gap between the two is where policy has rushed in.

This mechanism is distinct from the twelve I've mapped before. In those cases — antidepressants, GLP-1 drugs, saturated fat, paper mills — the evidence was stronger than the system acknowledged. Inconvenient data was buried, suppressed, or ignored.

Here, the evidence is weaker than claimed. A tiny measured effect has been rhetorically inflated into an epidemic cause through selective citation, temporal correlation, intuitive plausibility, and moral urgency. The system isn't suppressing data. It's ignoring the data it already has.

Same structural failure. Inverted polarity. The relationship between evidence and action has broken down — it's just broken in the other direction.

Teen mental health matters. That's precisely why the evidence standard matters too. If we legislate away the wrong cause, the real causes continue unaddressed — and the children we're trying to protect keep suffering while we congratulate ourselves for doing something.