Whatever Happens with AI and Jobs, You Need to Be Able to See It
Scott Galloway says the AI job apocalypse is a marketing strategy. He’s probably right. But here’s why that doesn’t let companies off the hook.
Scott Galloway published a piece last week (Apocalypse No) that’s worth reading — not because it tells you something radically new, but because it says something most people are afraid to say out loud: the AI job apocalypse narrative is, in large part, engineered fear. The people predicting mass unemployment the loudest happen to be the same people who profit when you believe it.
His argument is sharp. Anthropic’s CEO warns of a “white-collar bloodbath.” Elon Musk says no job will be needed. Sam Altman writes that labor will fall toward zero cost. And yet, as Galloway points out, tech employment has been remarkably flat. The Oracle and Meta layoffs that fueled headlines were mostly companies returning to pre-pandemic headcount — not AI replacing humans at scale. The data, so far, doesn’t match the narrative.
Galloway is skeptical of apocalyptic thinking, and history is on his side. Every major technological shift — the printing press, the Industrial Revolution, the spreadsheet, the internet, robotic process automation — triggered waves of “this time it’s different” panic. It rarely was. Automation eliminates tasks. It also creates demand for entirely new ones.
We think he’s right to push back on the fear-mongering. But there’s a problem most companies haven’t fully reckoned with, and it doesn’t go away just because the apocalypse isn’t coming.
Three Scenarios. All of Them Uncertain.
Galloway maps out three plausible futures for AI and the labor market, and he’s candid that nobody knows which one we’re heading into.
Scenario 1: The Bubble Bursts. AI investment is wildly concentrated — the “Mag 10” account for 40% of the S&P’s market cap. If the hype outpaces the returns, a correction could look a lot like a jobs crisis even if AI wasn’t actually the cause. Companies would cut, blame AI, and the narrative would become self-fulfilling.
Scenario 2: Jevons Paradox. AI delivers on its promises, but on a timeline the economy can adapt to. Cheaper execution creates new demand. Roles evolve. Programmers become architects. Accountants multiply as the cost of analysis drops. This is the optimistic scenario, and there’s real historical precedent for it.
Scenario 3: Disruption Outpaces Recovery. AI hits every sector simultaneously, faster than workers or policymakers can respond. Not necessarily mass unemployment — but persistent economic anxiety, hollowed-out roles, and a growing gap between those who benefit from AI and those who don’t.
Which scenario are you in? Right now, most companies genuinely cannot answer that question. And that’s the problem.
Galloway’s Critique Applies Inward, Too
Galloway’s central point is that the apocalypse narrative isn’t data-driven — it’s narrative-driven, shaped by people with something to gain from the fear. That’s a fair and important observation. But here’s the uncomfortable mirror image of that argument: most companies’ internal claims about AI’s impact aren’t data-driven either.
When a leadership team says “AI is making us more productive,” what’s that based on? When an executive reports that the workforce is “adapting well,” how is that being measured? When a board asks what AI is actually doing to headcount, workflows, and operating costs — what’s the honest answer?
For most organizations today, the answer is a combination of anecdote, assumption, and vendor-supplied metrics that are almost entirely self-reported. That’s not accountability. That’s a different kind of narrative.
The Measurement Gap Is the Real Risk
Whether Scenario 1, 2, or 3 plays out, companies that lack objective visibility into how AI is changing their workforce and operations will be caught flat-footed.
In a bubble correction, they won’t be able to distinguish AI-driven efficiency from cost-cutting theater. In the Jevons paradox scenario, they won’t know where productivity gains are actually materializing — or which teams are being stretched thin while others sit underutilized. And in the worst-case scenario, they won’t have the data to respond thoughtfully, protect the right roles, or make the case to their employees and boards that they saw it coming and acted responsibly.
The companies that will navigate this transition well — in any scenario — are the ones that can answer some very basic questions with real data:
- How is AI adoption actually spreading across our workforce, and where is it stalling?
- Are the teams using AI tools showing measurable changes in output, focus, or capacity?
- Which processes are being affected, and which are staying the same?
- Where are we seeing genuine efficiency gains, and where are we just shifting work around?
These aren’t hypothetical questions. They’re the questions your board, your investors, and your employees are already starting to ask. The only difference is whether you’ll be able to answer them.
What Objective Measurement Actually Looks Like
At Sapience, we work with organizations that want to move beyond anecdote and into evidence when it comes to understanding how their workforce operates — and how it’s changing.
That means looking at behavioral and activity data at the work level: how time and effort are distributed across tools, tasks, and processes; where capacity is being absorbed or freed up; and how those patterns shift over time. It’s not surveillance — it’s about giving leaders a clear, honest picture of what’s actually happening on the ground, so they can make decisions based on reality rather than assumption.
When AI enters the picture, that kind of visibility becomes even more valuable. You can start to see where AI tools are being adopted and what difference they’re actually making. You can track whether promised productivity gains are showing up in the data. And you can identify where the workforce is absorbing AI’s output without any corresponding recognition, capacity relief, or reinvestment.
That’s not panic. That’s stewardship.
The Bottom Line
Scott Galloway is probably right that the apocalypse isn’t coming. The history of technology and labor is mostly a story of adaptation, not extinction.
But responsible leadership doesn’t mean ignoring the transition. It means having the visibility to understand what’s actually happening, report on it honestly, and respond with intention.
Whatever scenario unfolds over the next three to five years, the companies that will come through it strongest aren’t the ones that panicked — or the ones that assumed everything was fine. They’re the ones that could see clearly.
Want to understand what AI is actually doing to your workforce and workflows? Reach out to the Sapience team to learn how we help organizations measure what matters.