INTELLIGENCE REPORT SERIES MARCH 2026 OPEN ACCESS

SERIES: ECONOMIC INTELLIGENCE

The Attention Economy — Designed Extraction

Social media platforms generate $1.17 trillion in annual advertising revenue by engineering compulsive engagement. Internal documents from Meta, TikTok, and Google reveal deliberate design choices that exploit psychological vulnerabilities, particularly in adolescents. The regulatory response remains structurally outmatched by the economic incentives driving compulsive design.

Reading Time: 40 min
Word Count: 7,893
Published: 26 March 2026
Evidence Tier Key → ✓ Established Fact ◈ Strong Evidence ⚖ Contested ✕ Misinformation ? Unknown
01

The Trillion-Dollar Attention Market
How platforms monetise human cognition

The global digital advertising market reached $1.17 trillion in 2025 — ✓ Established — representing 72.9% of all advertising expenditure worldwide [1]. This is not a technology story. It is an extraction story. The commodity being extracted is human attention, and the extraction apparatus is the most sophisticated behavioural engineering system ever constructed.

To understand the attention economy, begin with the revenue. In its FY2025 annual report, Meta Platforms disclosed total advertising revenue of $200.1 billion [1]. That figure represents a single company — one node in a network of platforms that collectively generated $1.17 trillion from digital advertising in a single calendar year. Global average revenue per user (ARPU) rose to $57.03, up from $49.63 in 2024. In the United States and Canada, ARPU reached $68.44 — the highest in any region [1]. Each user is, in purely financial terms, worth more every quarter. The product is not the social network. The product is the person using it.
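The per-user arithmetic behind those figures is simple to verify. The sketch below uses the ARPU numbers cited above; the calculation itself is illustrative, not drawn from Meta's filings.

```python
# Meta global average revenue per user (ARPU), per the FY2025 annual report [1]
arpu_2024 = 49.63
arpu_2025 = 57.03

# Year-on-year growth in what each user is "worth" to the platform
growth_pct = (arpu_2025 - arpu_2024) / arpu_2024 * 100
print(f"{growth_pct:.1f}%")  # ≈ 14.9% more revenue extracted per user in one year
```

A near-15% annual increase in revenue per person, sustained across billions of users, is the quantitative meaning of "the product is the person using it."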

TikTok, despite facing existential regulatory threats in the United States, generated an estimated $33.1 billion in revenue in 2025, with a US-specific ARPU of $96.71 — nearly double Meta's global average [7]. YouTube's advertising revenue reached $40.4 billion [1]. Alphabet, Amazon, and Microsoft collectively added hundreds of billions more. These are not technology companies in any traditional sense. They are attention-harvesting operations that happen to use technology as the harvesting mechanism.

The structural logic is straightforward. Advertisers pay platforms for access to human attention. Platforms therefore optimise for the quantity and intensity of that attention. Every design decision — the infinite scroll, the autoplay video, the notification cadence, the algorithmic feed — is subordinated to a single metric: time-on-platform. The longer a user remains engaged, the more advertising inventory the platform can sell. The more precisely the platform can profile that user's behaviour, the higher the price per impression. This is not a conspiracy theory. It is the business model described in every major platform's SEC filings and investor presentations [1].

The scale of this market has consequences that extend far beyond corporate balance sheets. When 72.9% of all global advertising spend flows through digital platforms, those platforms acquire structural power over information ecosystems, cultural production, news distribution, and political discourse. The attention economy is not merely an economic phenomenon — it is an infrastructural one. It shapes what people see, how long they see it, and what they do next. And it does so in accordance with a single optimisation function: maximising engagement to maximise revenue.

$1.17T
Global digital advertising market (2025)
Meta FY2025 Annual Report, Feb 2026 · ✓ Established
$200.1B
Meta advertising revenue (FY2025)
Meta FY2025 Annual Report, Feb 2026 · ✓ Established
$96.71
TikTok US ARPU (2025)
TikTok Internal Documents, Oct 2024 · ✓ Established
$40.4B
YouTube advertising revenue (2025)
Alphabet FY2025 Annual Report, Feb 2026 · ✓ Established
✓ Established Fact Global digital advertising reached $1.17 trillion in 2025, representing 72.9% of all advertising expenditure worldwide

This figure establishes the attention economy as the dominant economic model for information distribution globally. The concentration of advertising spend in digital channels — up from 52% in 2020 — means that platform engagement metrics now determine the financial viability of journalism, entertainment, education, and public discourse [1].

Consider what this market structure means in practice. A journalist competing for reader attention operates within the same algorithmic environment as a conspiracy theorist, a state propaganda operation, and a cosmetics brand. The platform does not distinguish between these actors because the platform's revenue model does not require it to. Engagement is engagement. A click driven by outrage generates the same advertising inventory as a click driven by curiosity. In the attention economy, truth is not a variable in the optimisation function — and that is not a bug. It is the architecture.

The trajectory is clear. Digital advertising's share of total advertising spend has grown every year for two decades, from under 10% in 2005 to 72.9% in 2025 [1]. The remaining 27.1% — television, print, radio, outdoor — continues to decline. This means the attention economy is not merely large; it is becoming the only economy in which information reaches mass audiences. The platforms are not competing with traditional media. They have already won. The question now is what that victory costs.

02

The Engineering of Compulsion
Slot machines, infinite scroll, and the architecture of addiction

The attention economy does not simply capture attention — it engineers compulsion. The design patterns embedded in modern social media platforms are not accidental. They are the product of deliberate behavioural engineering, drawing directly from the mechanics of slot machines and operant conditioning [11]. ✓ Established

In 2006, Aza Raskin — then a young interface designer at Humanized — invented the infinite scroll. The innovation eliminated the natural stopping cue that pagination had provided. Where a user once had to make a conscious decision to click "next page," the infinite scroll removed that friction entirely. Content simply appeared, endlessly, beneath the user's thumb. Raskin would later co-found the Center for Humane Technology and publicly estimate that his creation wastes approximately 200,000 human lifetimes per day [11]. ✓ Established The infinite scroll was not designed to harm. It was designed to remove stopping cues. The harm followed inevitably from the business model that adopted it.

The psychological mechanism at the core of compulsive platform use is variable ratio reinforcement — the same reward schedule that makes slot machines the most profitable form of gambling. In a variable ratio schedule, the reward (a like, a comment, a viral post, an interesting video) arrives unpredictably. The brain's dopamine system responds more intensely to uncertain rewards than to predictable ones. As former Google design ethicist Tristan Harris observed, the pull-to-refresh gesture on a smartphone is "unnervingly similar to a slot machine" — the user pulls the lever and waits to see what appears [11]. The dopamine hit comes not from receiving the reward, but from the anticipation of whether one will arrive.

This is not metaphor. It is neuroscience applied to interface design. Platform engineers — many recruited directly from the gaming industry — employ A/B testing at massive scale to identify the notification timing, content sequencing, and reward intervals that maximise user retention. Every element of the user experience has been optimised through millions of experiments on billions of users. The colour of a notification badge (red, to trigger urgency), the delay before showing like counts (to create anticipation), the algorithmic sequencing of content (to alternate between satisfying and frustrating experiences) — none of these are arbitrary design choices. They are the output of optimisation systems designed to maximise a single variable: time-on-platform.

The Slot Machine in Your Pocket

Variable ratio reinforcement is the most powerful schedule of operant conditioning known to behavioural science. The brain receives the largest dopamine release not from receiving a reward, but from the uncertainty of whether one will arrive. Every pull-to-refresh, every notification check, every scroll through a feed activates the same neural circuitry that keeps a gambler at a slot machine. The difference: slot machines are regulated, age-restricted, and confined to specific locations. Social media platforms are in every pocket, accessible 24 hours a day, and marketed to children.
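The difference between a predictable and an unpredictable reward schedule is easy to see in a toy simulation. This is an illustrative sketch of the two schedules, not platform code; the ratios and seed are arbitrary.

```python
import random

def fixed_ratio(pulls, ratio=5):
    """Fixed-ratio schedule: every 5th check is rewarded -- fully predictable."""
    return [(i + 1) % ratio == 0 for i in range(pulls)]

def variable_ratio(pulls, mean_ratio=5, seed=7):
    """Variable-ratio schedule: each check pays off with probability
    1/mean_ratio, so the user never knows whether the next
    pull-to-refresh will be rewarded."""
    rng = random.Random(seed)
    return [rng.random() < 1 / mean_ratio for _ in range(pulls)]

fr = fixed_ratio(1000)
vr = variable_ratio(1000)

# Both schedules pay out at roughly the same overall rate...
print(sum(fr), sum(vr))
# ...but on the fixed schedule every reward is certain in advance, while on
# the variable schedule each individual check is uncertain -- the property
# behavioural science associates with the strongest anticipatory response.
```

The platforms' overall "payout rate" of interesting content can stay constant; what the design optimises is the uncertainty of each individual check.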

TikTok's internal documents, disclosed through state attorney general investigations in 2024, provide the most granular evidence of deliberate compulsive design. Internal research identified 260 videos as the precise threshold for habit formation — the point at which a new user is likely to become a habitual, compulsive user [7]. ✓ Established The same documents noted that "across most engagement metrics, the younger the user, the better the performance" [7]. This is not an observation about user preferences. It is an observation about vulnerability. Younger users are more susceptible to variable ratio reinforcement because their prefrontal cortex — the brain region responsible for impulse control — is not fully developed until age twenty-five.

The design toolkit extends well beyond the infinite scroll. Snapchat's streak mechanic, introduced in 2016, creates artificial social obligations — users must exchange messages daily or lose their streak count, generating anxiety and compulsive checking behaviour. YouTube's autoplay feature, launched in 2013, removes the decision point between videos, transforming active viewing into passive consumption. Instagram Stories, introduced in 2016 as a direct clone of Snapchat's ephemeral content, create urgency through disappearing content — check now or miss it forever. Each of these features was designed, tested, and deployed because it increased engagement metrics. None was designed with user wellbeing as a primary constraint.

2006
Infinite scroll invented — Aza Raskin creates the infinite scroll at Humanized, eliminating natural stopping cues from web browsing. Later adopted by every major platform.
2009
Facebook Like button launches — Introduces quantified social approval. Variable reward mechanism: users check repeatedly to see if their posts received likes.
2013
YouTube autoplay deployed — Removes decision point between videos. Passive consumption replaces active choice, increasing average session length by 60%.
2016
Snapchat streaks introduced — Creates artificial daily obligation. Users must exchange content every 24 hours or lose accumulated streak count, generating anxiety-driven engagement.
2016
Instagram Stories launched — Ephemeral content creates urgency through artificial scarcity. Content disappears in 24 hours, compelling frequent return visits.
2017
TikTok launches globally — Full-screen, autoplay, algorithmically sequenced short-form video becomes the most potent engagement engine ever deployed at consumer scale.
2019
TikTok identifies 260-video threshold — Internal research pinpoints the exact engagement level at which new users transition from casual to habitual, compulsive usage.
2020
Short-form video arms race begins — Instagram launches Reels, YouTube launches Shorts, Snapchat launches Spotlight. Every major platform clones TikTok's compulsive design model.
2024
AI-driven recommendation replaces social graph — Platforms shift from showing content from people users follow to algorithmically selected content from strangers, maximising novelty and unpredictability.
✓ Established Fact TikTok internal documents identify 260 videos as the precise threshold for habit formation

Documents disclosed through state attorney general investigations reveal that TikTok's own research identified the exact engagement level at which users transition from casual to compulsive. The same documents noted that engagement metrics improve with younger users — a finding that describes vulnerability, not preference [7].

The most recent evolution is perhaps the most consequential. Beginning in 2023 and accelerating through 2024, major platforms shifted from social-graph-based feeds — showing content from people a user follows — to algorithmically curated feeds dominated by content from strangers. TikTok pioneered this model; Instagram, YouTube, and Facebook adopted it. The effect is to maximise novelty and unpredictability — the precise variables that amplify variable ratio reinforcement. Users no longer scroll through updates from friends. They scroll through an algorithmically optimised sequence of stimuli designed to maintain engagement at the neurological level. The social network has become a Skinner box at civilisational scale.

None of these design choices are inevitable consequences of digital technology. Chronological feeds, finite content lists, and deliberate friction are all technically trivial to implement. They are not implemented because they reduce engagement metrics. And reduced engagement means reduced revenue. The engineering of compulsion is not a side effect of the attention economy. It is the attention economy's core product.

03

What the Platforms Already Knew
Internal research, whistleblowers, and suppressed evidence

The most damning evidence against the attention economy does not come from academic researchers, regulators, or advocacy groups. It comes from the platforms themselves. Internal documents — disclosed through whistleblowers, congressional investigations, and state attorney general litigation — reveal that major platforms were aware of the psychological harm their products caused and chose to prioritise growth over user safety [4] [14]. ✓ Established

In September 2021, the Wall Street Journal published "The Facebook Files" — a series of investigative reports based on tens of thousands of internal documents provided by former Facebook product manager Frances Haugen [4]. The documents revealed that Facebook's own researchers had studied the impact of Instagram on teenage users and reached conclusions the company never made public. The internal research found that 32% of teen girls said Instagram made them feel worse about their bodies [4]. Among teenagers who reported experiencing suicidal thoughts, 13% of British users and 6% of American users traced the origin of those thoughts to Instagram [4]. ✓ Established These were not external allegations. They were findings produced by Facebook's own research teams, using Facebook's own data, and circulated within Facebook's own internal communications systems.

The documents further revealed that Facebook was aware of these findings and chose not to act on them in any structurally meaningful way. Internal presentations discussed the harm in clinical terms. Researchers recommended changes. Those recommendations were not implemented when they conflicted with engagement metrics. As Haugen testified before the United States Senate Commerce Committee in October 2021: "The company's leadership knows how to make Facebook and Instagram safer but won't make the necessary changes because they have put their astronomical profits before people" [14]. Haugen subsequently testified before the UK Parliament and the European Parliament, providing the same body of evidence to legislators across three jurisdictions.

Facebook consistently resolved conflicts between its own profits and our safety in favour of its own profits. The result has been a system that amplifies division, extremism, and polarisation — and undermining societies around the world.

— Frances Haugen, testimony before the US Senate Commerce Committee, October 2021

The TikTok internal documents, disclosed through a coalition of state attorney general investigations in October 2024, painted an equally stark picture [7]. The documents — initially filed under heavy redaction, then partially unsealed by a Kentucky court — revealed that TikTok's own researchers had identified the 260-video habit formation threshold and understood its implications for younger users. One internal document stated plainly: "Across most engagement metrics, the younger the user, the better the performance" [7]. Another described the platform's recommendation algorithm as a "black box" — opaque even to many of TikTok's own engineers. Internal communications included the observation that the platform's relationship with young users resembled that of a "drug" — a characterisation made not by external critics but by TikTok's own employees.

Meta's internal communications, disclosed through separate litigation, contained similarly candid admissions. Internal chat logs obtained through discovery processes included exchanges in which employees described Instagram as "a drug" and the company's role as that of "pushers." These characterisations were not made in jest. They were made in the context of internal discussions about user retention strategies and engagement metrics. The employees understood the mechanics of what they were building. The company understood the consequences. The products were not modified because the consequences were not borne by the company — they were externalised onto users, particularly younger users.

✓ Established Fact Facebook's own internal research found that 32% of teen girls reported Instagram made them feel worse about their bodies

This finding was produced by Facebook's own researchers and circulated internally. Among teenagers experiencing suicidal thoughts, 13% of British users and 6% of American users traced those thoughts to Instagram. Facebook did not make this research public, disclose it to regulators, or implement the structural changes its own researchers recommended [4].

The pattern across platforms is consistent. Internal research identifies harm. Researchers recommend changes. Recommendations are evaluated against engagement and revenue impact. Changes that would reduce engagement are shelved or diluted. The company implements cosmetic safety features — parental controls, time-limit reminders, restricted accounts for teens — while leaving the underlying engagement-maximising architecture intact. ⚖ Contested Meta's own internal safety team flagged these cosmetic measures as insufficient, noting "infrequent use, low adoption and high burdens on parents" [4]. The industry's claimed commitment to user safety is contradicted by the industry's own internal assessments of its safety tools.

The suppression of research is not incidental to the business model — it is integral to it. Public disclosure of internal harm findings would have created regulatory pressure, litigation risk, and reputational damage. By keeping the research internal, platforms maintained the information asymmetry that allowed them to continue optimising for engagement while publicly claiming to prioritise safety. This is not a failure of corporate governance. It is corporate governance functioning exactly as the incentive structure demands. The attention economy requires that the true costs of attention extraction remain invisible to the people from whom it is being extracted.

04

The Cognitive Toll
Attention spans, dopamine loops, and the rewiring of concentration

The attention economy does not merely capture time. It restructures cognition. Research on attention spans, notification interruption patterns, and dose-response relationships between screen time and mental health outcomes reveals a consistent picture: sustained exposure to engagement-maximising platforms degrades the capacity for focused thought [8]. ◈ Strong Evidence

Gloria Mark, a professor of informatics at the University of California, Irvine, has spent two decades measuring how people allocate attention. Her research, conducted across multiple longitudinal studies and published with Microsoft's WorkLab, documents a sustained decline in average attention span on a single screen: from approximately 150 seconds in 2004 to just 47 seconds in 2024 [8]. ◈ Strong Evidence The decline is not linear — it has accelerated in recent years, coinciding with the proliferation of short-form video content and notification-heavy mobile applications. Mark's research further establishes that recovering focus after a single interruption requires an average of 25 minutes [8]. In a digital environment designed to interrupt — through notifications, autoplay, and algorithmic content sequencing — the cumulative cognitive cost is staggering.

The notification environment facing the average teenager compounds these effects dramatically. Common Sense Media's 2023 "Constant Companion" study found that teenagers receive an average of 237 push notifications per day, with 25% arriving during school hours and 5% arriving at night [10]. ✓ Established Generation Z averages 181 daily alerts across all applications [10]. Each notification represents a potential interruption — a stimulus designed to pull the user's attention away from whatever they are currently doing and redirect it toward the platform. At 237 notifications per day, with 25 minutes required to refocus after each interruption, the theoretical maximum attention loss exceeds 98 hours per day — an obviously impossible figure, and that is precisely the point: the notification environment demands more refocusing time than a day contains, making full engagement with it arithmetically incompatible with any sustained cognitive activity.
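The arithmetic behind that impossible figure is worth making explicit. Both inputs come from the sources cited above; the worst-case assumption that every notification triggers a full recovery cycle is mine, for illustration.

```python
notifications_per_day = 237  # Common Sense Media, 2023 [10]
refocus_minutes = 25         # Gloria Mark, UC Irvine / Microsoft WorkLab [8]

# Worst case: every notification triggers a full interruption-recovery cycle
total_refocus_hours = notifications_per_day * refocus_minutes / 60
print(f"{total_refocus_hours:.1f} hours")  # 98.8 hours of refocusing demanded per 24-hour day
```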

47 sec
Average attention span on a single screen (2024)
Gloria Mark, UC Irvine / Microsoft WorkLab, 2024 · ◈ Strong
25 min
Time required to refocus after a single interruption
Gloria Mark, UC Irvine / Microsoft WorkLab, 2024 · ◈ Strong
237+
Average daily push notifications received by teenagers
Common Sense Media, Sep 2023 · ✓ Established
1.61×
Depression risk multiplier at ≥4 hours daily screen time
CDC, 2025 · ◈ Strong

The neurological mechanism underlying these patterns involves the dopamine system — specifically, the mesolimbic pathway that mediates reward anticipation and pleasure. Social media platforms exploit this system through variable ratio reinforcement, as described in Section 02. But the long-term consequences extend beyond individual moments of engagement. Repeated activation of the dopamine system through artificial stimuli — likes, notifications, algorithmic content hits — produces a phenomenon neuroscientists describe as dopamine deficit. Over time, the baseline level of dopamine decreases, meaning that the user experiences less pleasure from non-platform activities and requires increasingly intense platform stimulation to achieve the same hedonic response. The platform creates the craving it promises to satisfy.

The Dopamine Deficit Cycle

The neuroscience of compulsive platform use follows a predictable cycle. Initial engagement triggers dopamine release — the reward signal. Repeated engagement raises the threshold for dopamine activation — habituation. The user’s baseline dopamine level drops below its pre-platform state — deficit. The user now experiences less pleasure from non-platform activities and returns to the platform to restore dopamine levels — dependency. Over time, the platform itself becomes less satisfying, requiring more frequent and more intense engagement to achieve the same neurological effect. The product creates the craving it promises to satisfy. This is not a metaphor for addiction. It is the mechanism of addiction.

The dose-response data strengthens the case for a causal relationship between screen time intensity and psychological harm. The CDC's 2025 analysis of adolescent mental health indicators found that daily screen time of four hours or more was associated with significantly elevated risks across multiple domains: depression (adjusted odds ratio 1.61), anxiety (aOR 1.45), behaviour problems (aOR 1.24), and ADHD symptoms (aOR 1.21) [13]. ◈ Strong Evidence Notably, the same study found that physical activity mediates between 30% and 39% of the association between screen time and mental health outcomes — suggesting that displacement of physical activity is a significant mechanism through which screen time generates harm [13]. The harm is not purely neurological. It is also physical: sedentary hours spent on platforms displace the exercise that would otherwise buffer against depression and anxiety.

The OECD's 2025 report on screen time and subjective well-being arrived at a complementary finding: high or unbalanced screen time is consistently associated with lower mental health outcomes, while moderate and purposeful use may actually support well-being [15]. The pattern is not that all screen time is harmful. The pattern is that the type of screen time platforms are designed to maximise — passive, extended, algorithmically driven consumption — is the type most consistently associated with negative outcomes. The platforms are not optimising for moderate, purposeful use. They are optimising for the opposite, because the opposite generates more revenue.

Mark's research also reveals a particularly troubling finding: self-interruption now exceeds external interruption [8]. Users do not merely respond to notifications — they anticipate them, checking platforms compulsively even in the absence of any external trigger. The behavioural engineering has been internalised. The slot machine no longer needs to ring. The user pulls the lever on their own.

05

The Adolescent Emergency
Why children are not small adults — and why the design does not care

Ninety-five per cent of US teenagers aged 13–17 use social media [2]. Forty-six per cent report being online "almost constantly." The average teenager spends 4.8 hours per day on social media platforms [2]. ✓ Established These platforms were not designed for adolescent brains. They were designed for engagement metrics. The adolescent brain happens to be the most engagement-rich target available.

In May 2023, the United States Surgeon General issued a formal advisory on social media and youth mental health — an instrument reserved for matters of urgent public health concern [2]. The advisory stated plainly: "We do not yet have enough evidence to determine if social media use is sufficiently safe for children and adolescents." The phrasing is significant. It does not say social media is safe. It says the evidence is insufficient to conclude that it is safe. The Surgeon General further noted that teenagers using social media for more than three hours per day face double the risk of depression and anxiety symptoms [2]. ◈ Strong Evidence The average American teenager already exceeds this threshold.

The American Psychological Association (APA) issued its own advisory the same month, adopting a more nuanced position: social media use is "not inherently beneficial or harmful to young people" [3]. The effects, the APA concluded, depend on "individual and environmental factors," including the type of content consumed, the quality of online interactions, and the presence or absence of adult supervision. The APA recommended that adults monitor social media use for children aged 10–14 and that digital literacy training become a standard requirement across educational settings [3]. The nuance is important — and it is precisely what the attention economy's business model is built to overwhelm. Individual and environmental factors matter, but they are no match for the scale and sophistication of engagement-maximising design.

Jonathan Haidt's The Anxious Generation, published in March 2024 and spending 52 weeks on the New York Times bestseller list, presented the most comprehensive popular case for a causal relationship between social media and adolescent mental health decline [5]. Haidt identifies what he calls the "Great Rewiring of Childhood" — the period between 2010 and 2015 during which smartphone adoption among American teenagers went from minority to near-universal. During this same period, depressive symptoms among adolescents increased by 33%, and the suicide rate among girls aged 10–14 increased by 65% [5]. ◈ Strong Evidence Haidt argues that the temporal coincidence, combined with seven independent lines of evidence — including correlational studies, longitudinal research, experiments, and internal company data — establishes causation beyond reasonable doubt.

◈ Strong Evidence Teenagers using social media for more than three hours per day face double the risk of depression and anxiety symptoms

The US Surgeon General's 2023 advisory identified a dose-response relationship between social media use intensity and mental health risk in adolescents. The three-hour threshold is significant because average teenage social media use already exceeds it, at 4.8 hours per day. This means the majority of teenage social media users in the United States are in the elevated-risk category [2].

The specific mechanisms of harm in adolescents are well-documented, even where the overall causal question remains debated. Body image distortion is among the most robustly established: Facebook's own research found that 32% of teen girls reported Instagram made them feel worse about their bodies [4]. Social comparison — a process to which adolescents are developmentally more susceptible than adults — is amplified by platforms that present curated, filtered, and often digitally altered images as normative. Eating disorders, body dysmorphia, and appearance-related anxiety have all been linked to intensive social media use in adolescent populations, particularly among girls.

The Human Cost

Among teenagers who reported experiencing suicidal thoughts, 13% of British users and 6% of American users traced the origin of those thoughts to Instagram. Facebook’s own researchers produced these findings in 2019. The company did not make them public. They were disclosed only through Frances Haugen’s whistleblower testimony two years later [4] [14]. When a company’s own research links its product to suicidal ideation in children, and that company suppresses the research to protect its growth metrics, the word “negligence” understates the situation.

Sleep disruption represents another well-established pathway. Notifications arriving at night — 5% of the daily average of 237+, according to Common Sense Media [10] — interrupt sleep architecture in ways that compound adolescent vulnerability. The blue light emitted by screens suppresses melatonin production, but the psychological stimulus of social media content is the more significant disruptor. Adolescents who check social media within an hour of bedtime report significantly lower sleep quality, which in turn exacerbates depression, anxiety, and cognitive impairment. The platforms do not pause for bedtime. They are designed not to.

The developmental asymmetry is the most critical and least discussed factor. The prefrontal cortex — responsible for impulse control, risk assessment, and long-term planning — does not fully mature until approximately age twenty-five. Adolescents are neurologically less equipped to resist compulsive design patterns than adults. They are also more sensitive to social reward and social rejection, making the variable ratio reinforcement of likes, comments, and followers more neurologically potent for teenagers than for any other demographic. The platforms' own data confirms this: TikTok's internal documents explicitly note that younger users produce better engagement metrics [7]. The design does not account for developmental vulnerability because accounting for it would reduce engagement. And reducing engagement would reduce revenue. ⚖ Contested The causation debate — explored in Section 07 — is legitimate. But it should not obscure the more fundamental question: whether it is acceptable for a trillion-dollar industry to deploy engagement-maximising systems against developing brains while the scientific community resolves methodological disagreements.

Generation Z now averages 9 hours of screen time per day — substantially above the global average of 6 hours and 38 minutes [2]. This is not incidental. It is the designed outcome of platforms optimised to maximise time-on-platform. The adolescent emergency is not a failure of parenting. It is a success of engineering.

06

The Regulatory Response
What governments are doing — and why it is not enough

Regulatory action against the attention economy is accelerating across multiple jurisdictions — from the EU’s Digital Services Act to Australia’s under-16 social media ban [6] [12]. But the regulatory response remains structurally outmatched by the economic incentives driving compulsive design. ✓ Established

The European Union's Digital Services Act (DSA), which entered full application in February 2024, represents the most comprehensive regulatory framework currently in force. By February 2026 — two years after implementation — the European Commission had imposed its first major fine: €120 million against X (formerly Twitter) for deceptive design practices that violated the DSA’s provisions on dark patterns and transparency [6]. ✓ Established TikTok Lite — a stripped-down version of TikTok that offered coin rewards for watching videos — was permanently withdrawn from the EU market following enforcement action [6]. Fourteen investigations into major platforms remain active. The DSA framework is significant because it addresses not just content moderation but platform design itself — specifically the use of dark patterns and engagement-maximising interfaces that exploit psychological vulnerabilities.

Australia took the most dramatic regulatory step of any jurisdiction in November 2024, passing the Online Safety Amendment (Social Media Minimum Age) Act, which banned social media access for children under 16 — with no parental consent exception [12]. ✓ Established The ban took effect in December 2025, with penalties for non-compliant platforms reaching A$49.5 million [12]. Platforms bear the responsibility for age verification, not parents or children. The legislation passed with overwhelming bipartisan support, reflecting broad public concern about the impact of social media on young Australians. However, the ban's long-term effectiveness remains uncertain — enforcement depends on age verification technology that is still maturing, and determined minors may find ways to circumvent verification systems.

In the United States, the Kids Online Safety Act (KOSA) has been advancing through Congress but has not yet become law. The legislation would require platforms to enable the strongest privacy and safety settings by default for users under 17 and would impose a duty of care on platforms to prevent harm to minors. In September 2024, 42 state attorneys general from both parties demanded that Congress require warning labels on social media platforms — a deliberate invocation of the tobacco regulatory precedent [2]. ◈ Strong Evidence The Surgeon General has endorsed the warning label approach. However, the legislative process in the US remains slow relative to the pace of platform evolution, and industry lobbying continues to dilute proposed provisions.

China's approach offers a cautionary tale about enforcement limitations. In 2021, China imposed strict limits on gaming by minors — restricting access to one hour per day on weekends and holidays only. The policy was initially hailed as the world's most aggressive intervention against screen time in minors. However, subsequent research revealed a 77% evasion rate, with minors circumventing real-name verification requirements by using relatives' accounts or purchasing account access on secondary markets [6]. ✓ Established The Chinese experience demonstrates that regulatory intent without robust enforcement infrastructure produces compliance theatre rather than behavioural change.

The United Kingdom's Online Safety Act, passed in October 2023, empowered Ofcom as the regulator for online safety and imposed new duties on platforms regarding illegal content, children's access, and risk assessment. France and Spain are leading an EU-wide initiative to establish a minimum social media age of 15, which would complement the DSA's design-focused provisions with an age-based access restriction. Brazil passed social media regulation for minors in 2025, with enforcement beginning in March 2026. The global trend is unmistakable: governments are moving to regulate the attention economy. The question is whether they are moving fast enough, and with sufficient technical sophistication, to outpace the platforms' capacity to adapt.

2021
China imposes gaming limits for minors — One hour per day on weekends only. Later revealed: 77% evasion rate through account sharing and identity circumvention.
2023
UK Online Safety Act passed — Ofcom empowered as online safety regulator. New duties on platforms regarding children’s access and illegal content.
2024
EU Digital Services Act takes full effect — Comprehensive framework addressing dark patterns, algorithmic transparency, and platform design — not just content moderation.
2024
Australia passes under-16 social media ban — World’s first national age-based ban. No parental consent exception. Platforms responsible for age verification.
2024
US Surgeon General calls for warning labels — 42 bipartisan state AGs endorse the proposal. Tobacco regulatory parallel deliberately invoked.
2025
EU fines X €120 million — First major DSA enforcement action. X penalised for deceptive design patterns. TikTok Lite permanently withdrawn from EU.
2025
Australia under-16 ban takes effect — Platforms begin implementing age verification. Penalties up to A$49.5 million for non-compliance.
2025
France and Spain lead EU under-15 initiative — Proposal to establish EU-wide minimum social media age of 15, complementing DSA design-focused provisions.
2026
KOSA advances in US Congress — Kids Online Safety Act moves through committee. Would require strongest privacy defaults for users under 17.
2026
Brazil enforcement begins — Brazilian social media regulation for minors enters enforcement phase, joining the growing global regulatory movement.
Regulatory Approach / Effectiveness / Assessment
EU DSA enforcement model
High
First major fines imposed but enforcement scales slowly against platform iteration speed. €120M fine represents 0.06% of Meta’s annual revenue — a rounding error, not a deterrent.
Age-based bans (Australia model)
Medium
Strong public support but enforcement technically challenging and risks excluding vulnerable youth communities, particularly LGBTQ+ adolescents who depend on online peer support.
Parental consent frameworks
Medium
Shifts burden to families. Meta’s own internal research shows low adoption of available parental tools. Does not address underlying addictive design mechanics.
Platform design mandates
High
Most structurally effective approach — addresses root cause. But requires technical expertise regulators currently lack; industry will resist aggressively via lobbying.
Warning label approach (US proposal)
Medium
Tobacco parallel is politically compelling but platforms are interactive, not passive products. Labels alone insufficient without structural design changes.

The structural mismatch between regulatory capacity and platform capability is the defining challenge. Platform companies employ thousands of engineers who can iterate on product design in days; regulatory investigations take months or years. A fine of €120 million — the DSA's first major penalty — represents roughly 0.06% of Meta's annual advertising revenue. This is not a deterrent. It is a cost of doing business. ⚖ Contested The effectiveness of age-based bans remains particularly uncertain: Australia's ban is too new to evaluate, and China's experience suggests that age verification systems are porous. The most structurally promising approach — mandating changes to platform design itself, such as requiring chronological feeds or banning engagement-maximising algorithms for minors — is also the approach that faces the most intense industry opposition. The platforms' lobbying expenditure in the US alone exceeds $100 million per year. The regulators are not merely outgunned. They are outspent, outpaced, and frequently outmatched in expertise.
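The fine-to-revenue comparison is simple division. A back-of-the-envelope check, treating the euro and the dollar as roughly at parity for illustration:

```python
# Back-of-the-envelope check of the fine-to-revenue comparison above.
# Illustrative only: treats EUR and USD as roughly at parity.
fine_eur = 120e6               # first major DSA fine, against X [6]
meta_ad_revenue_usd = 200.1e9  # Meta FY2025 advertising revenue [1]

share = fine_eur / meta_ad_revenue_usd
print(f"{share:.2%}")  # prints "0.06%"
```

Even read generously, the penalty amounts to a few hours of the company's advertising income.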

07

The Causation Debate
What the science actually settles — and what it does not

The most contested question in the attention economy literature is whether social media use causes adolescent mental health decline or merely correlates with it. The debate is methodologically legitimate — and strategically exploited by an industry that benefits from perpetual uncertainty. ⚖ Contested

The strongest case for causation has been made by Jonathan Haidt, whose The Anxious Generation presents seven independent lines of evidence converging on the conclusion that social media is a "major cause" of the adolescent mental health crisis [5]. Haidt's evidence base includes correlational studies showing temporal coincidence between smartphone adoption and mental health decline, longitudinal studies establishing temporal precedence (social media use predicts later depression, not the reverse), experimental studies demonstrating mood effects from platform exposure, and internal company data confirming platforms' own awareness of harm. The cumulative case, Haidt argues, is overwhelming — comparable in strength to the evidence linking smoking to lung cancer in the 1960s.

The strongest case against causation comes from Andrew Przybylski at the Oxford Internet Institute, whose 2024 study — one of the largest ever conducted on the topic — analysed data from more than two million people across 168 countries [9]. The study found "only minor shifts in global mental health over two decades of increasing online connectivity" and "no consistent evidence linking Facebook adoption to negative well-being" [9]. Przybylski and his co-authors argue that the effect sizes reported in most social media harm studies are small to moderate, that cross-sectional study designs dominate the literature (making causal inference inappropriate), that self-reported screen time data is unreliable, and that confounding variables — including poverty, family instability, academic pressure, and the COVID-19 pandemic — have been insufficiently controlled in most analyses.

The Case for Causation

Longitudinal studies show temporal precedence
Social media use at Time 1 predicts depression at Time 2, not the reverse. This pattern appears across multiple independent datasets and age groups.
Internal company data confirms awareness of harm
Meta’s own research found 32% of teen girls said Instagram worsened body image. TikTok identified the 260-video habit threshold. Companies knew and did not act.
Experimental studies show mood effects
Randomised controlled trials in which participants reduce or eliminate social media use consistently show improvements in mood, sleep quality, and self-reported well-being.
Seven independent lines of evidence converge
Haidt identifies correlational, longitudinal, experimental, internal company, quasi-experimental, neuroscience, and demographic evidence all pointing in the same direction.
Dose-response relationship established
CDC data shows graded increase in depression risk (aOR 1.61), anxiety (aOR 1.45), and ADHD (aOR 1.21) as screen time increases beyond four hours per day.
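A note on reading the adjusted odds ratios above: an aOR of 1.61 is not a 61% increase in risk. Converting an odds ratio into an implied risk requires a baseline prevalence, which the aOR alone does not supply. A minimal sketch, assuming a purely hypothetical 20% baseline depression prevalence (not a figure from the CDC data):

```python
# Convert an adjusted odds ratio to an implied risk, given a baseline prevalence.
# The 20% baseline is a hypothetical assumption for illustration, not CDC data.
def risk_from_aor(baseline_risk: float, aor: float) -> float:
    odds = baseline_risk / (1 - baseline_risk)  # baseline odds
    exposed_odds = aor * odds                   # odds under exposure
    return exposed_odds / (1 + exposed_odds)    # back to a probability

base = 0.20
exposed = risk_from_aor(base, 1.61)
print(f"risk: {base:.0%} -> {exposed:.1%}; risk ratio {exposed / base:.2f}")
# prints "risk: 20% -> 28.7%; risk ratio 1.43"
```

The implied risk ratio (1.43 under this hypothetical baseline) is smaller than the odds ratio, a standard caveat when aORs are reported for non-rare outcomes.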

The Case for Caution

Effect sizes are small to moderate
Most studies report effect sizes comparable to eating potatoes or wearing glasses. Critics argue these are too small to justify population-level alarm or regulatory intervention.
Largest study (2M+ people) finds minimal association
Przybylski’s Oxford study of 2+ million people across 168 countries found only minor shifts in mental health over two decades of increasing connectivity.
Cross-sectional designs dominate the literature
Most studies measure social media use and mental health at a single point in time, making it impossible to determine whether social media causes harm or distressed individuals use more social media.
Self-report bias in screen time measurement
Studies relying on self-reported screen time are unreliable. Objective measurement studies show individuals routinely overestimate or underestimate their actual usage by 30-50%.
Confounding variables insufficiently controlled
Poverty, family instability, academic pressure, COVID-19, economic inequality, and reduced access to mental health services all coincide temporally with social media adoption.

The APA's advisory occupies a deliberately measured middle ground: "Using social media is not inherently beneficial or harmful to young people" [3]. The effects, the APA concludes, depend on individual factors (age, developmental stage, pre-existing mental health conditions), environmental factors (parental involvement, school context, socioeconomic status), and use patterns (passive scrolling versus active creation, exposure to harmful content versus supportive communities). This nuance is scientifically appropriate. It is also, in the context of the attention economy, strategically irrelevant — because the platforms are not designed for moderate, purposeful, contextually appropriate use. They are designed for maximal engagement regardless of user characteristics.

Using social media is not inherently beneficial or harmful to young people. The effects depend on individual and environmental factors, and on the types of content and features they are exposed to.

— American Psychological Association, Health Advisory on Social Media Use in Adolescence, May 2023

The methodological limitations on both sides of the debate are real. Haidt's critics correctly note that correlational evidence cannot establish causation, that effect sizes are often modest, and that the "Great Rewiring" thesis overstates the homogeneity of a highly diverse set of platforms, use patterns, and cultural contexts. Przybylski's critics correctly note that population-level analyses can mask subgroup effects (a finding of "no average harm" can coexist with severe harm to vulnerable subpopulations), that the Oxford study's ecological design cannot detect individual-level causal pathways, and that the absence of evidence is not evidence of absence. ⚖ Contested

But the most important observation about the causation debate is not methodological. It is strategic. The tobacco industry sustained a causation debate about smoking and lung cancer for decades — not because the evidence was genuinely ambiguous, but because ambiguity served the industry's commercial interests. Every year the debate continued was a year in which regulation was delayed. The attention economy's relationship to the causation debate follows an identical structure. Platforms fund research that emphasises uncertainty. They amplify findings that cast doubt on harm. They invoke the complexity of the science as a reason for regulatory caution. The debate is real. The exploitation of the debate is also real. Both facts can coexist.

The framing itself contains a logical error that deserves attention. You do not need to prove that slot machines cause gambling addiction to regulate their placement in primary schools. You do not need to prove that cigarettes cause lung cancer to prohibit their sale to children. The standard for regulating an industry's access to minors is not "proof of causation beyond all methodological dispute." It is the precautionary principle: where there is credible evidence of harm, and where the population at risk has diminished capacity for self-protection, the burden of proof falls on the entity deploying the product, not on the children exposed to it. By this standard, the evidence is not merely sufficient. It is overwhelming.

08

What the Evidence Actually Tells Us
The engineering problem, not the parenting problem

The central misinformation of the attention economy is the framing itself. Compulsive social media use is presented as a parenting problem — a failure of willpower, discipline, or family oversight. The evidence tells a different story. It is an engineering problem, backed by a $1.17 trillion business model, and the framing that obscures this is not accidental [1] [11]. ◈ Strong Evidence

The structural asymmetry is the single most important fact in this entire report. On one side: a $1.17 trillion industry employing tens of thousands of engineers, behavioural scientists, and data analysts, deploying the most sophisticated persuasion technology ever constructed, optimised through millions of A/B tests on billions of users, backed by virtually unlimited capital, and operating under a business model that generates more revenue the more compulsive the product becomes [1]. On the other side: individual users — including children as young as ten — equipped with consumer-grade screen time tools that studies show produce at best a 20-30% reduction in usage, and parents who are themselves targets of the same engagement-maximising systems. This is not a fair contest. It is not designed to be.

The Center for Humane Technology, founded in 2018 by Tristan Harris (former Google design ethicist) and Aza Raskin (the inventor of infinite scroll), represents the most prominent organisational counterweight to the attention economy [11]. The organisation has achieved meaningful impact — its advocacy contributed to product changes at Facebook, Apple, and Google, and its public campaigns have shaped regulatory discourse in the US and EU. Harris's appearance in the Netflix documentary The Social Dilemma reached tens of millions of viewers and significantly raised public awareness of attention engineering. But the structural disparity remains: the Center for Humane Technology operates on an annual budget of approximately $10 million. The industry it opposes generates $1.17 trillion.

The Structural Asymmetry

The attention economy generates $1.17 trillion per year. The Center for Humane Technology — the most prominent counter-organisation — operates on a budget of approximately $10 million. This is the ratio: 117,000 to 1. The framing of compulsive platform use as a parenting problem is not an observation. It is a strategy. It shifts the burden of resisting trillion-dollar engagement systems from the corporations that designed them to the families they target. Every time the discourse centres on “screen time limits” rather than “engagement-maximising algorithms,” the industry wins.
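The 117,000-to-1 figure follows directly from the two budget numbers cited above:

```python
# Verify the asymmetry ratio quoted above.
attention_economy_usd = 1.17e12  # annual digital advertising revenue [1]
cht_budget_usd = 10e6            # approximate CHT annual budget [11]

ratio = attention_economy_usd / cht_budget_usd
print(f"{ratio:,.0f} to 1")  # prints "117,000 to 1"
```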

Screen time management tools — Apple's Screen Time, Google's Digital Wellbeing, Instagram's "Take a Break" reminders — represent awareness tools, not structural solutions. They alert users to their own behaviour without changing the environment that shapes it. This is the equivalent of installing a speedometer in a car with no brakes. The user can observe how fast they are going. They cannot change the road design that encourages speeding. Studies of screen time tools consistently show modest short-term reductions in usage (typically 20-30%), followed by gradual return to baseline as users learn to dismiss or circumvent the tools [15]. The platforms that build these tools understand their limitations. They build them because the tools serve a public relations function, not a harm-reduction function.

Ethical alternative platforms demonstrate that different design choices are technically possible. Bluesky, Mastodon, and other decentralised social media platforms offer chronological feeds, user-controlled algorithms, and business models not dependent on advertising revenue. These platforms prove that social networking does not require engagement-maximising design. However, they also demonstrate why the attention economy is self-reinforcing: engagement-maximising design is not merely a feature of major platforms — it is their competitive advantage. Platforms that optimise for engagement attract more users, generate more data, sell more advertising, and invest more in further engagement optimisation. Ethical alternatives cannot compete with this feedback loop at scale. The problem is not that better design is impossible. It is that the market rewards worse design.

Structural change would require interventions that address the business model itself, not merely its symptoms. The most commonly proposed structural reforms include: banning engagement-maximising algorithms for users under 18, requiring chronological feeds as the default setting for all users (with algorithmic feeds available as an opt-in), mandating interoperability to reduce network effects and enable competition, imposing fiduciary duties on platforms to act in users' interests, and taxing attention-based advertising revenue to fund digital literacy and mental health programmes. Each of these proposals would directly reduce the revenue generated by compulsive design. This is why each faces intense industry opposition. The proposals are structurally effective precisely because they threaten the structural incentives that produce harm.

The evidence presented in this report does not prove that social media is the sole cause of adolescent mental health decline. The science is more complex than that, and responsible analysis must acknowledge methodological limitations and contested findings. But the evidence does establish — beyond any reasonable dispute — the following: platforms are deliberately engineered for compulsive use; internal documents prove platforms knew their products caused harm to adolescents; the regulatory response, while accelerating, remains structurally outmatched by industry resources and adaptation speed; and the framing of attention capture as a personal responsibility problem serves the commercial interests of the industry that designed the product.

The question is not whether individuals should exercise prudence in their use of social media. Of course they should. The question is whether individual prudence is a sufficient response to a trillion-dollar industry designed to overcome it. The evidence on that question is not ambiguous. Individual prudence is necessary. It is not sufficient. The attention economy is an engineering problem. It requires an engineering solution — backed by regulatory force, informed by the science, and proportionate to the $1.17 trillion business model that sustains it.

The platforms know this. Their own engineers, in their own internal communications, using their own data, have said as much. The only remaining question is whether the political will exists to act on what is already known — or whether the causation debate will serve, as it served for tobacco, as a multi-decade delay tactic while the extraction continues.

SRC

Primary Sources

All factual claims in this report are sourced to specific, verifiable publications. Projections are clearly distinguished from empirical findings.

Cite This Report

APA
OsakaWire Intelligence. (2026, March 26). The Attention Economy — Designed Extraction. Retrieved from https://osakawire.com/en/the-attention-economy-designed-extraction/
CHICAGO
OsakaWire Intelligence. "The Attention Economy — Designed Extraction." OsakaWire. March 26, 2026. https://osakawire.com/en/the-attention-economy-designed-extraction/
PLAIN
"The Attention Economy — Designed Extraction" — OsakaWire Intelligence, 26 March 2026. osakawire.com/en/the-attention-economy-designed-extraction/
