SERIES: WHAT IS ACTUALLY HAPPENING
A sourced, calibrated analysis of what AI is doing to work — separated by what is established fact, what is contested, and what is myth. Neither panic nor reassurance. Evidence.
Three distinct camps are shouting past each other. None is entirely right. Understanding the structure of the debate is the prerequisite to understanding the evidence.
Few topics generate more confident, contradictory claims per column inch than artificial intelligence and jobs. In a single week, credible outlets will publish "AI is coming for half of all white-collar jobs" and "AI creates more jobs than it destroys — history proves it." Both headlines are technically defensible. Neither is the full story.
The confusion is structural, not accidental. It arises from three distinct debates being collapsed into one: what is happening now (empirical, measurable), what will happen by 2030 (contested projection), and what will happen over 20–50 years (genuinely unknown). Researchers who are pessimistic about long-run structural employment and researchers who are optimistic about near-term aggregate job creation are sometimes citing different time horizons — and both can be correct simultaneously.
There is also a political economy to the optimism. Companies building AI have a vested interest in the narrative that their technology creates more jobs than it destroys. Unions and displaced workers have a vested interest in documenting harm. Neither group is necessarily dishonest — both are applying selection pressure to which evidence gets amplified.
This report attempts to be useful by being disciplined: every claim is explicitly categorised by evidence tier. Where the data is clear, we say so. Where it is contested, we say so and present both credible sides. Where claims are not supported by evidence — regardless of which camp makes them — we say that too.
Separating the numbers that are well-established from the projections that are contested — and being honest about the difference.
Goldman Sachs Research (2025) documents a measurable, statistically significant drop in employment specifically among workers aged 22–25 in AI-exposed occupations. Unemployment among 20–30 year-olds in tech-exposed roles rose by ~3 percentage points since early 2025. Software developers aged 22–25 saw a ~20% drop in employment compared to their late-2022 peak. This is corroborated by Stanford Digital Economy Lab's "Canaries in the Coal Mine" (Brynjolfsson et al., 2025), which identified the same cohort as the leading indicator of AI labour market impact.
Importantly: overall employment continues to rise. This is not a macro employment collapse — it is a targeted compression of the entry-level hiring pipeline in specific AI-exposed roles. The signal is concentrated in who gets hired to start their career, not in mass layoffs.
Multiple high-quality randomised experiments confirm that AI tools genuinely raise productivity. Brynjolfsson, Li & Raymond (2025, Quarterly Journal of Economics) — the most rigorous study available — found that generative AI raised customer support worker productivity by an average of 14%, with the largest gains for lowest-skilled workers. GitHub Copilot studies show coding speed improvements of ~55% for developers. The key finding is that AI currently functions as a productivity amplifier, not a replacement, for most active workers. The replacement dynamic is manifesting in hiring — fewer new people needed — rather than in layoffs of existing workers.
Labour's share of income in US nonfarm businesses fell from ~64% in 1980 to ~57% in 2017 (Acemoglu, Manera & Restrepo, 2020). This pre-AI trend reflects decades of capital-biased automation, where the effective tax rate on labour (~25–34%) far exceeds that on capital (~5–10%), incentivising substitution. IMF research (2024–2025) projects that AI is likely to further increase returns to capital at the expense of labour income — but this effect depends heavily on whether AI complements or substitutes high-income workers, and the degree to which productivity gains are captured by capital owners versus distributed as wages.
"The most widespread impact of generative AI is likely to be on job quality rather than job quantity."
— International Labour Organization (ILO/NASK Global Index, May 2025)

Risk level, exposure mechanism, and projected timeline — sorted by occupational category. Risk percentages are model-based estimates (see Section 02 caveat).
| Occupation | Primary Mechanism | Timeline | Evidence |
|---|---|---|---|
| Customer Service Representatives | LLM chatbots handle Tier-1 inquiries; human roles shrink to exception handling | 2024–2026 (already active) | ✓ Established |
| Data Entry Clerks | Direct automation of repetitive data processing; high-accuracy OCR + AI | 2024–2027 | ✓ Established |
| Administrative / Secretarial Assistants | Scheduling, drafting, document management, email — all AI-replicable | 2024–2028 | ◈ Strong Evidence |
| Translators / Interpreters | LLMs now at near-human quality for standard commercial translation; documented employment decline | 2023–2026 (ongoing) | ✓ Established |
| Bookkeepers / Accounting Clerks | Routine financial data processing fully automatable; AI accounting software scaling fast | 2025–2028 | ◈ Strong Evidence |
| Proofreaders / Copy Editors | Grammar/style correction tasks now performed by AI at superior accuracy for standard content | 2023–2026 | ◈ Strong Evidence |
Sources: ILO/NASK Global Index 2025; Goldman Sachs Research 2025; McKinsey Global Institute; IMF SDN 2024
| Occupation | Primary Mechanism | Timeline | Evidence |
|---|---|---|---|
| Junior Software Developers | Code generation tools reduce entry-level role requirements; senior roles become more productive | 2024–2028 | ✓ Established |
| Paralegals / Legal Assistants | Document review, legal research, contract analysis — all LLM-replicable at speed | 2025–2029 | ◈ Strong Evidence |
| Financial Analysts (entry level) | Routine analysis, report generation, data synthesis now automated; senior judgment retained | 2025–2030 | ◈ Strong Evidence |
| Journalists / Content Writers | Data journalism and standardised content generation automated; investigative/narrative less so | 2024–2028 | ◈ Strong Evidence |
| Radiologists (screening layer) | AI outperforms humans on initial image screening; role shifts toward complex diagnosis, communication | 2026–2032 | ⚖ Contested |
| Retail Cashiers | Self-checkout + frictionless retail scaling; Amazon Go model expanding | 2024–2030 | ◈ Strong Evidence |
| Occupation | Primary Mechanism | Notes | Evidence |
|---|---|---|---|
| Teachers / Educators | Administrative tasks + standardised content generation automated; core instruction and mentoring resilient | Role transformation likely; volume reduction unlikely short-term | ⚖ Contested |
| Accountants / Auditors (senior) | Routine elements automatable; complex judgment, client relationship, liability-facing work resilient | Bifurcation: junior roles squeezed, senior roles amplified | ◈ Strong Evidence |
| Marketing Specialists | Content creation, A/B testing, campaign analysis rapidly automating; creative strategy less so | Productivity amplifier currently; displacement coming at entry level | ◈ Strong Evidence |
| Human Resources | Screening, scheduling, admin automated; culture, conflict resolution, judgment-heavy work not | ATS (applicant tracking systems) already AI-driven | ◈ Strong Evidence |
| Truck / Delivery Drivers | Autonomous vehicles technically approaching viability; regulatory, insurance, last-mile delays remain | High volume of affected workers (3.5M in US alone); timelines consistently delayed | ⚖ Contested |
| Occupation | Why Resilient | Risk Level | Evidence |
|---|---|---|---|
| Plumbers / Electricians / Skilled Trades | Requires physical dexterity in unstructured environments; robots cannot yet perform reliably or cost-effectively | Low — 10–20 year horizon minimum | ✓ Established |
| Registered Nurses | Physical care, patient communication, emotional labour, clinical judgment in unstructured situations | Low for displacement; high for productivity augmentation (AI diagnostics) | ✓ Established |
| Mental Health Therapists | Therapeutic relationship, empathy, nuanced human judgement; AI tools as supplements, not replacements | Low — regulatory and ethical barriers high even if AI improves | ◈ Strong Evidence |
| Early Childhood Educators | Physical care, relationship formation, developmental monitoring — not replicable by AI | Very low | ✓ Established |
| Senior Executives / CEOs | Strategic judgment, relationship capital, accountability, ambiguous decision-making in novel situations | Low — but AI will amplify productivity of those who adopt it | ◈ Strong Evidence |
| Construction Workers | Physical manipulation in unstructured, variable environments; robotics not yet viable at scale | Low for 10+ years; potentially higher 2030–2040 | ✓ Established |
The aggregate numbers obscure radically different experiences across gender, age, education, and income. The same "AI creates more jobs" headline can be simultaneously true in the aggregate and catastrophic for specific populations.
The structural reason: 93–97% of secretary and administrative assistant positions in the US were held by women between 2000 and 2019 (US Census Bureau). These are Tier-1 occupations for AI displacement. The ILO finds that the overrepresentation of women in clerical and administrative roles is the primary driver of the gender gap in AI exposure — not any inherent characteristic of women's work being uniquely automatable.
The compounding problem: Women are not only concentrated in higher-risk jobs — they are adopting AI tools at lower rates, making them less likely to shift from "AI replaces me" to "AI amplifies me." Research suggests women face additional social penalties for using AI tools (concerns about being perceived as "cheating" or less intelligent) that men do not face to the same degree.
The bias layer: AI systems trained on historical data reproduce and can amplify existing gender biases in recruitment, pay decisions, and credit scoring — creating risk in both the jobs lost and the jobs applied for. The ILO notes that women are underrepresented in AI development (only 22% of AI professionals globally per WEF 2025), making self-correction through diverse development teams structurally difficult.
The entry-level compression: The clearest and most documented age effect is the compression of entry-level hiring. AI is reducing the need for junior workers precisely in the roles that traditionally served as the first rung of professional career ladders: junior developer, junior analyst, junior paralegal, customer service rep. The pipeline to senior roles is narrowing before those roles are themselves threatened.
The irony for Gen Z: The generation most worried about AI is not the generation losing jobs in the aggregate — overall employment is not crashing. They are the generation finding the door to their career ladder narrower than it was for previous cohorts. This is a real harm even if the macro numbers look fine.
Older displaced workers: Workers aged 50+ who lose AI-exposed jobs face the most severe transition challenges. The Boston Fed's research (December 2024) found that about 21% of surveyed workers expected AI to worsen their financial situation within 5 years, concentrated heavily in this older cohort. Retraining for new sectors is harder, takes longer, and has lower returns at this life stage — this group is the one identified by Brookings as most vulnerable.
IMF Working Paper (Rockall, Tavares, Pizzinelli, 2025) distinguishes between three occupational groups: HELC (High Exposure, Low Complementarity — the danger zone), HEHC (High Exposure, High Complementarity — the amplified zone), and LE (Low Exposure — largely unaffected). The critical policy question is which workers fall where.
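The IMF taxonomy reduces to a simple two-axis rule. A minimal sketch in Python — the 0.5 cutoff and the example scores are illustrative assumptions for demonstration, not values from the working paper:

```python
# Sketch of the IMF exposure/complementarity taxonomy (Rockall, Tavares,
# Pizzinelli, 2025). Thresholds and scores below are assumed, not sourced.

def classify(exposure: float, complementarity: float, cutoff: float = 0.5) -> str:
    """Map an occupation's AI exposure and complementarity scores to a group."""
    if exposure < cutoff:
        return "LE"    # Low Exposure: largely unaffected
    if complementarity < cutoff:
        return "HELC"  # High Exposure, Low Complementarity: the danger zone
    return "HEHC"      # High Exposure, High Complementarity: the amplified zone

# Hypothetical example scores for illustration only:
occupations = {
    "data entry clerk": (0.9, 0.2),  # highly exposed, little complementarity
    "radiologist":      (0.8, 0.8),  # highly exposed, strongly complemented
    "plumber":          (0.1, 0.3),  # barely exposed at all
}
for name, (exp, comp) in occupations.items():
    print(f"{name}: {classify(exp, comp)}")
```

The point of the two-axis framing is that "exposure" alone predicts nothing: the same high-exposure score lands a worker in either the danger zone or the amplified zone depending on complementarity.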
Low education (no college degree): Lower immediate exposure to AI (IMF: 26% for low-income country workers vs 60% for advanced economies), but also lower capacity to transition to AI-complementary roles. The "protection" of not being in the AI economy's crosshairs is partly an artefact of not yet having access to the digital infrastructure that enables both the risk and the opportunity.
College degree holders: 44% acknowledge AI can perform some of their tasks (vs 22% without college) — higher awareness, but also higher adaptive capacity. Studies confirm workers with post-secondary education experience AI more as a complement to their capabilities than a substitute.
The Brookings Institution confirmed: "Better-paid, better-educated workers face the most exposure." But exposure does not mean harm if complementarity is high. The real danger is the layer of workers with enough education to be in AI-exposed roles but without the seniority, adaptability, or resources to pivot to complementarity.
The class dimension of AI employment impact is the most politically combustible and analytically contested aspect of this topic. It requires careful separation of two different dynamics operating simultaneously.
The most common rebuttal to AI displacement fears is that new jobs will emerge, as they have in previous technological transitions. In wealthy countries, this is a contested but plausible case. In the developing world, it is much harder to make.
India's paradox: India aspires to become a major AI hub, with its AI market projected to grow at 25–35% CAGR by 2027. Yet India's ~$250 billion IT and business services sector — which employs millions in English-speaking, outsourced cognitive roles — is precisely the sector most exposed to AI automation from Western corporations cutting costs. The workers who benefit from India's AI ambitions and those who lose jobs to it are entirely different populations, separated by education, language, location, and income.
The data annotation trap: A significant share of the "new jobs" AI brings to the Global South consists of data labelling, content moderation, and AI training work — often paying $1–2.50 per hour in Kenya, with similar rates in Bangladesh and India. These workers do the unglamorous labour that makes AI systems function, with minimal protections, no career pathway, and exposure to psychologically harmful content. UNCTAD has warned that AI could erode the competitive advantage of low-cost labour in developing countries — the one economic lever they have — without creating equivalent alternative opportunity.
IMF research (2024) and ResearchGate analysis (2025) confirm: displacement concentrates in 2024–2027 while job creation spreads across longer timelines. In advanced economies, the institutions, safety nets, and educational systems to manage this transition exist (imperfectly). In developing economies experiencing displacement of outsourced roles, those institutional buffers are absent. The result: developing economies experience displacement without offsetting creation, widening international inequality.
In Latin America: ~25% of jobs in Brazil, Chile, Colombia, Mexico, and Peru exhibit high exposure to AI yet low task complementarity — rendering them highly vulnerable to substitution. For workers in call centres and outsourced services specifically, this risk is characterised as "acute" by ILO researchers.
The most powerful argument for labour market optimism is 200 years of evidence that technology creates more jobs than it destroys. This argument deserves serious engagement — and serious scrutiny.
The standard economic rebuttal: The "lump of labour fallacy" — the mistaken belief that there is a fixed amount of work to be done — is a real logical error. New technologies create new demand, new industries, new occupations we cannot predict in advance. Federal Reserve Governor Barr (May 2025) noted that economists have long been sceptical of the assumption that automation leads to permanent unemployment.
The Acemoglu counterpunch (MIT/IMF, December 2023): "There is no guarantee that, on its current path, AI will generate more jobs than it destroys." The historical pattern of new job creation relied on a balance between automation and new task creation. Around 1970, that balance broke down. Labour's share of income has been falling for 50 years. New task creation has slowed, particularly for workers without four-year college degrees. AI may accelerate an already-broken dynamic, not reverse it.
The speed argument: Historical transitions took generations. The loom displaced weavers over 50–100 years; workers' children adapted. AI is potentially compressing equivalent transitions to 5–10 years. Even if the long-run outcome is net positive, transition costs measured in human lives — income loss, psychological distress, family disruption — are real and concentrated in specific populations who cannot simply "wait for the new jobs."
The term "General Purpose Technology" (GPT in the economic sense) refers to technologies that reshape multiple sectors simultaneously — electricity, computing, the internet. Deming and Summers (2025) concluded that AI qualifies as a GPT of this magnitude.
What is arguably different about AI vs previous GPTs:
1. Previous GPTs automated physical or narrow cognitive tasks. AI is the first technology capable of performing general reasoning, language, and creative tasks — the work previously considered uniquely human and automation-proof.
2. Previous GPTs created new tasks that required human labour to perform. The new tasks AI creates (AI trainer, AI ethics officer, AI product manager) require far fewer workers relative to the tasks they replace. Prompt engineers — once predicted to be a large occupation — comprise less than 0.5% of LinkedIn job postings.
3. The capital-to-labour substitution incentive is structurally embedded in the US tax code (labour taxed at ~30%, capital at ~8%), making replacement the rational choice for any corporate actor.
"The US economy had 2.5 industrial robots per thousand workers in manufacturing in 1993. This number rose to 20 by 2019. Excessive automation has caused a decline in labour's share of income from 64% in 1980 to 57% in 2017."
— Acemoglu, Manera & Restrepo, cited in Chicago Booth Review

AI is raising productivity in AI-exposed sectors. This is well-evidenced and not seriously disputed. The crucial, contested question is whether those productivity gains translate into broader prosperity or concentrate further at the top.
The productivity evidence is real. Randomised controlled experiments — the gold standard of social science — confirm AI tools raise output in professional settings. The question is not whether productivity increases, but who captures that increase.
Acemoglu and Johnson (Power and Progress, 2023) introduce the concept of the "productivity bandwagon": the idea that for the majority of people to benefit from productivity growth, that productivity must be "anchored" to improved efficiency of human labour — raising workers' marginal productivity — rather than simply automating human tasks and capturing the gains as capital income.
The EPI (Economic Policy Institute) analysis adds that the effective tax rate on labour is approximately double that on capital in the US, meaning companies are structurally incentivised to substitute capital for labour even when it is not the most economically efficient choice. Brynjolfsson (MIT) recommends equalising effective tax rates on labour and capital as the most direct intervention to change this incentive structure.
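The incentive EPI describes is simple arithmetic. A toy sketch using the approximate effective rates cited in this report (~30% on labour, ~8% on capital); the pre-tax costs are hypothetical:

```python
# Toy illustration of the capital-vs-labour tax wedge. Effective rates
# (~30% labour, ~8% capital) are the approximate figures cited in the
# report; the pre-tax input costs are invented for demonstration.

def after_tax_cost(pre_tax_cost: float, effective_tax_rate: float) -> float:
    """Total cost to the firm once the effective tax on that input is added."""
    return pre_tax_cost * (1 + effective_tax_rate)

labour  = after_tax_cost(100.0, 0.30)  # worker: 100 pre-tax -> 130.0
capital = after_tax_cost(115.0, 0.08)  # machine: 115 pre-tax -> 124.2

# The machine is 15% MORE expensive pre-tax, yet cheaper after tax:
# substitution is rational for the firm even when it is less efficient.
print(f"labour: {labour}, capital: {capital}, substitute: {capital < labour}")
```

This is the narrow sense in which the tax code makes automation "the rational choice": the wedge can flip the ranking of two inputs without any underlying efficiency gain, which is exactly the distortion Brynjolfsson's rate-equalisation proposal targets.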
The 1990s counter-evidence: EPI research shows the 1990s — which saw massive technology-driven productivity growth from the internet — resulted in broad-based wage growth and declining unemployment, not concentrated gains. The explanation: unemployment was driven sufficiently low to generate genuine bargaining power for workers. The policy lesson is that macro employment conditions matter as much as technology itself for whether productivity gains get distributed.
Both the catastrophist and the dismissive camps produce widely-shared claims that are not supported by the evidence. This section identifies the most common ones on both sides.
Frey & Osborne (Oxford Martin, 2013) produced a highly cited model predicting that 47% of US occupations were at high risk. Harvard Data Science Review (Fall 2025) documents that this was a task-level analysis incorrectly extended to whole jobs. An OECD replication using a task-based approach arrived at 9% — five times lower. More critically: the occupations flagged as "at risk" in 2013 (tax preparers, telemarketers, insurance underwriters) have not, in fact, disappeared at scale over the subsequent 12 years. The 47% figure is a 2013 model output published with significant methodological caveats — presenting it as established fact is misinformation.
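The methodological gap can be made concrete. A toy sketch — every job, task flag, and threshold below is invented — showing how the aggregation rule alone, applied to identical task-level data, produces very different headline figures:

```python
# Toy illustration of why task-level exposure and whole-job risk diverge.
# All jobs, task-automatability flags, and thresholds are invented; this
# demonstrates the aggregation effect, not either study's actual model.

jobs = {
    "paralegal":    [True, True, False, False],   # half of tasks automatable
    "telemarketer": [True, True, True, True],     # all tasks automatable
    "nurse":        [False, True, False, False],  # one task automatable
}

def share_automatable(tasks: list) -> float:
    """Fraction of a job's tasks flagged as automatable."""
    return sum(tasks) / len(tasks)

# Rule A (whole-job extension): flag the entire job once a modest share
# of its tasks is exposed. This inflates the headline count.
rule_a = [job for job, t in jobs.items() if share_automatable(t) >= 0.25]

# Rule B (task-weighted): flag only jobs where most tasks are exposed.
rule_b = [job for job, t in jobs.items() if share_automatable(t) >= 0.70]

print(f"Rule A: {len(rule_a)}/{len(jobs)} jobs at risk")  # 3/3
print(f"Rule B: {len(rule_b)}/{len(jobs)} jobs at risk")  # 1/3
```

Same data, two defensible-sounding rules, a fivefold gap in the headline number: this is the shape of the 47%-versus-9% disagreement.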
This Goldman Sachs figure (2023) is frequently misquoted. The original report stated that 300 million full-time job equivalents could be exposed to automation if AI were widely adopted — a task-exposure estimate under an optimistic AI deployment scenario. The same report projected that the most likely displacement scenario is 6–7% of the US workforce, with unemployment rising by just 0.5 percentage points above trend during the transition period, before recovering within approximately two years. The 300M figure is real; presenting it as a near-term mass unemployment forecast is not.
Anthropic CEO Dario Amodei stated in 2025 that AI could eliminate roughly 50% of white-collar entry-level positions within five years. Nvidia CEO Jensen Huang explicitly pushed back. The evidence shows real and documented compression of entry-level hiring in AI-exposed sectors — especially tech. However, "50% of white-collar entry-level jobs" across all industries within 5 years would require adoption speed and scope that current data does not confirm. The underlying concern is legitimate; the specific number and timeline are not well-evidenced.
The historical pattern is real: 60% of today's US jobs didn't exist in 1940. But Acemoglu and Johnson document that new task creation has slowed since 1970, the balance between automation and job creation is already off-kilter, and the speed of AI adoption may compress transitions that historically took generations. The pattern's past validity does not guarantee future validity — particularly when AI is the first technology to threaten general reasoning tasks rather than just specific manual or narrow cognitive ones. The lump-of-labour fallacy is a real economic error; dismissing AI risk entirely by invoking it is also an error.
The evidence on retraining programmes is sobering. "The China Shock" (Autor, Dorn & Hanson, 2016) — the most impactful US economics paper of the last decade — demonstrated that import competition from China devastated large parts of the American workforce, and that retraining programmes largely failed to produce successful transitions. Under the US Workforce Innovation and Opportunity Act (WIOA), as of 2023–24, fewer than 10% of training participants received on-the-job training; just 2% received apprenticeships. Successful retraining examples are rare. Telling displaced 55-year-old manufacturing or clerical workers to "reskill" without addressing structural barriers of cost, time, psychological difficulty, and age discrimination is not a policy — it is a reassurance that fails the most vulnerable workers.
Harvard Data Science Review (Fall 2025) documents: prompt engineers comprise less than 0.5% of a recent sample of advertised jobs on LinkedIn (Vu & Oppenlaender, 2025). The specific "new jobs from AI" predictions that circulated widely in 2022–2023 (prompt engineer, AI ethicist as a mass employer) have largely failed to materialise at the predicted scale. This does not mean no new jobs will emerge from AI — it means that specific predictions about which jobs will emerge are systematically unreliable, and that the total volume of net new jobs is much harder to predict than the jobs being displaced.
A landscape of what is actually being tried, what the evidence says about each intervention, and the structural gap between the scale of potential disruption and the scale of policy response.
Structural problems require structural solutions. But while waiting for policy, individuals can take actions that the evidence supports. Filtered by your life stage and sector.
If your occupation appears in the "Critical Risk" or "High Risk" tables in Section 03 — or if your role is primarily administrative, data-entry, or routine customer service.
If your occupation appears in the "Resilient" category — skilled trades, healthcare, education, complex professional services.
All factual claims in this report are sourced to specific, verifiable publications. Projections are clearly distinguished from empirical findings.