The Risk of AI-Driven Layoffs: Why Rational Firms Automate Themselves into a Demand Crisis

Commentary on “The AI Layoff Trap” by Hemenway Falk & Tsoukalas (2026)

This post is a commentary on the recent working paper “The AI Layoff Trap” by Brett Hemenway Falk and Gerry Tsoukalas.
The paper makes a provocative and deeply uncomfortable claim: even fully rational, forward-looking firms can collectively automate too much, destroying not only jobs but also their own future demand and profits. This is not a story about myopic managers, hype-driven AI adoption, or social negligence. It is a story about competition itself becoming the problem. [2603.20617v1 | PDF]

In what follows, I summarize the core argument of the paper, highlight why it matters far beyond academia, and offer a practical interpretation for technology leaders, executives, and policymakers navigating large-scale AI deployment today.


The Core Question: If Everyone Sees the Cliff, Why Do Firms Keep Running?

A common counterargument to fears about AI-driven mass layoffs goes like this:

“Surely firms will stop before things get too bad. They depend on consumers. They see the risk.”

Hemenway Falk and Tsoukalas show why this intuition fails. Their key insight is simple but powerful: each firm captures all the cost savings from automation, but only bears a fraction of the resulting demand loss. The rest of the damage is pushed onto competitors.

Even when every firm understands that layoffs reduce aggregate purchasing power, competition turns automation into a dominant strategy. Each firm rationally automates too much, not because it ignores the consequences, but because it cannot afford not to automate while others do. [2603.20617v1 | PDF]


A Demand-Side Externality, Not a Labor-Market Story

Most public debates about AI and jobs focus on the labor market:

  • Will new tasks appear?
  • Will wages adjust?
  • Can workers reskill fast enough?

This paper deliberately shifts the lens to the product market. Workers are not only employees—they are also consumers. When AI displaces them faster than income is replaced, demand falls across the entire sector. That lost demand hurts all firms, including those that automated.

Crucially, this mechanism:

  • Does not rely on worker misery alone
  • Does not disappear with wage flexibility
  • Does not vanish when AI is genuinely productive

The problem is structural and competitive. [2603.20617v1 | PDF]


Competition Makes It Worse, Not Better

One of the most counterintuitive findings is that more competition increases over-automation.

  • A monopolist fully internalizes the demand loss it creates.
  • In fragmented markets, each firm internalizes only 1/N of that loss.
  • As the number of firms grows, the automation wedge widens.
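The 1/N logic can be made concrete with a stylized numerical sketch. This is my own toy model with invented parameters `s` (cost savings per unit of automation) and `d` (demand destroyed per unit), not the paper's formal setup: each firm keeps the full cost saving from its automation, but bears only a 1/N share of the demand loss it causes.

```python
# Toy model of the 1/N internalization wedge (a stylized sketch with
# invented parameters, NOT the paper's formal model).
#
# Each firm chooses whether to automate one more task.
# - Cost savings s accrue entirely to the automating firm.
# - The resulting demand loss d is spread evenly over all N firms,
#   so the automating firm bears only d / N of the damage it causes.

def private_marginal_gain(s: float, d: float, n: int) -> float:
    """Marginal profit a single firm sees from one more unit of automation."""
    return s - d / n

def social_marginal_gain(s: float, d: float) -> float:
    """Marginal profit for the industry as a whole (all damage internalized)."""
    return s - d

s, d = 1.0, 1.5  # automation destroys more demand than it saves in cost

for n in (1, 2, 5, 100):
    private = private_marginal_gain(s, d, n)
    print(f"N={n:3d}: private margin={private:+.2f}, "
          f"social margin={social_marginal_gain(s, d):+.2f}, "
          f"firm automates={private > 0}")
```

With these numbers, the monopolist (N=1) sees the full −0.5 margin and abstains, while every fragmented market (N≥2) sees a positive private margin and automates, even though the industry as a whole loses money on each automated task. The wedge grows toward the full demand loss as N grows.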

In the extreme case with frictionless automation, the situation collapses into a Prisoner’s Dilemma:

  • Individually optimal: fully automate
  • Collectively optimal: restrain automation
  • Actual outcome: everyone automates, everyone loses

This flips the traditional narrative that competition disciplines firms in the interest of society. Here, competition destroys the very demand base firms depend on. [2603.20617v1 | PDF]
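The frictionless limit can be written down as a literal 2x2 Prisoner's Dilemma. The payoff numbers below are hypothetical, chosen by me only to match the qualitative ordering the paper describes:

```python
# A 2x2 Prisoner's Dilemma sketch of the frictionless-automation limit
# (hypothetical payoffs chosen to match the qualitative story, not taken
# from the paper).
#
# Strategies: "restrain" or "automate". Payoff = cost savings captured
# minus the share of the demand loss this firm ends up bearing.

PAYOFFS = {  # (my_move, rival_move) -> my profit
    ("restrain", "restrain"): 3,  # healthy demand, no savings race
    ("restrain", "automate"): 0,  # rival captures savings; I share the demand loss
    ("automate", "restrain"): 4,  # I capture savings; rival shares the loss
    ("automate", "automate"): 1,  # everyone saves costs, demand collapses
}

def best_response(rival_move: str) -> str:
    """The move that maximizes my payoff given the rival's move."""
    return max(("restrain", "automate"),
               key=lambda my_move: PAYOFFS[(my_move, rival_move)])

# Automating is a dominant strategy...
assert best_response("restrain") == "automate"
assert best_response("automate") == "automate"
# ...yet mutual automation pays less than mutual restraint.
assert PAYOFFS[("automate", "automate")] < PAYOFFS[("restrain", "restrain")]
print("Dominant strategy: automate; equilibrium pays 1 vs cooperative 3")
```

Whatever the rival does, automating pays more, so both firms automate and both end up worse off than if both had restrained. That is the trap in miniature.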


“Better AI” Accelerates the Trap

Another uncomfortable result: improvements in AI technology do not solve the problem—they amplify it.

As AI becomes cheaper and more capable:

  • Each firm perceives a stronger incentive to automate ahead of rivals
  • Relative gains cancel out in equilibrium
  • Only the demand destruction remains

The paper describes this as a Red Queen effect: firms run faster just to stay in the same place, while collectively moving closer to the cliff. [2603.20617v1 | PDF]


Why Popular Policy Responses Fall Short

The authors systematically test policy ideas that dominate public discourse:

  • Upskilling – helps workers but does not remove the automation incentive
  • Worker equity participation – narrows the gap but does not close it
  • Universal Basic Income – stabilizes consumption levels but leaves firm incentives unchanged
  • Capital income taxes – affect profit levels, not marginal automation decisions
  • Voluntary coordination – fails because automation is a dominant strategy

In other words: none of these address the externality at its source. [2603.20617v1 | PDF]


The Unpopular Conclusion: A Pigouvian Automation Tax

The paper arrives at a conclusion many technologists dislike but economists recognize immediately:
only a Pigouvian tax on automation can implement the cooperative optimum.

Such a tax:

  • Prices in the uninternalized demand loss per automated task
  • Directly targets the margin where the distortion occurs
  • Can fund retraining and income replacement, shrinking the problem over time

Importantly, the argument does not rest on protecting workers for moral reasons alone. Even a planner that places zero weight on worker welfare would still restrain automation—because over-automation reduces firm profits themselves. [2603.20617v1 | PDF]
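Continuing the toy model from above (again a stylized sketch with invented parameters, while the paper derives the tax formally): the untaxed private margin is s − d/N, so each firm ignores the (N−1)/N share of the demand loss that falls on rivals. A corrective tax prices exactly that share back in.

```python
# Sketch of how a per-task Pigouvian tax realigns the automation margin
# (stylized illustration with invented parameters s and d, not the
# paper's derivation).

def pigouvian_tax(d: float, n: int) -> float:
    """Tax per automated task equal to the uninternalized demand loss."""
    return d * (n - 1) / n

def taxed_private_margin(s: float, d: float, n: int) -> float:
    """Private margin once the tax is in place: s - d/N - tax."""
    return s - d / n - pigouvian_tax(d, n)

s, d, n = 1.0, 1.5, 10
# After the tax, every firm faces the full social margin s - d
# (up to floating-point error):
assert abs(taxed_private_margin(s, d, n) - (s - d)) < 1e-9
print(f"taxed private margin = {taxed_private_margin(s, d, n):+.2f}")
```

Note that the tax vanishes for N = 1: a monopolist already internalizes the full demand loss, so no correction is needed, which matches the monopoly benchmark described earlier.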


What This Means for Executives and Technology Leaders

For leaders rolling out AI at scale, the message is sobering:

  • Rational strategy at the firm level can be destructive at the market level
  • “Everyone else is doing it” is not a defense—it is the mechanism
  • Efficiency gains do not automatically translate into sustainable growth

The paper explains why AI adoption races feel so unstable: inside competitive markets, there is no natural stopping point.


Final Reflection: The Real Risk Is Not AI, but Unpriced Incentives

“The AI Layoff Trap” does not argue against AI. It argues against leaving system-level effects to competition alone.

If the authors are right, then the biggest risk of AI is not mass unemployment per se, but a self-inflicted demand collapse driven by perfectly rational firms.

What is your take on this? Let me know in the comments or on X.
