The NBER just surveyed nearly 6,000 executives across the US, UK, Germany and Australia. 69% of their firms actively use AI. 89% report no measurable productivity impact over three years. 90% report no employment impact. The same executives now forecast a +1.4% productivity lift over the next three years · the same bet that just failed its first run. Both numbers are true. The gap between them is the entire AI productivity story · and almost nobody is measuring it right.
In February 2026 the National Bureau of Economic Research published working paper w34836. Thirteen authors · Stanford's Nicholas Bloom among them · surveyed nearly 6,000 CFOs, CEOs and senior executives across four countries between November 2025 and January 2026. The survey panels were the same ones the US Federal Reserve, the Bank of England, the Deutsche Bundesbank and Macquarie University already use to track macro outcomes. These are not unknown firms; their answers feed the panels that track GDP.
The headline finding is the one you already feel. 69% of firms actively use AI · 78% in the US, 71% in the UK. More than two-thirds of executives report using AI themselves during working hours. And then the punch: 89% report no measurable impact on labor productivity over the last three years. 90% report no impact on employment.
The same executives · asked the same question in the same breath · forecast a 1.4% productivity gain, a 0.8% output gain and a 0.7% employment cut over the next three years. The same bet that just returned zero is being re-run at higher conviction. Both numbers are true at the same time. The gap between them is where the next decade of AI adoption gets decided.
The realized–forecast gap
What executives say AI did · and what they say it will do next.
Three signals inside the same data
The C-suite admission.
PwC surveyed 4,454 CEOs globally for its 29th Annual CEO Survey (Jan 2026). 56% say they are getting nothing out of AI. Only 10–12% report revenue or cost benefits. PwC Chair Mohamed Kande attributed low returns to a lack of foundational rigor · clean data, solid processes, governance.
The pilot graveyard.
MIT's NANDA initiative published findings in August 2025: 95% of corporate generative-AI pilots are failing. The headline cause isn't model quality. It's the "learning gap" between tools and the organizations trying to adopt them. A pilot that can't cross into production never reaches the P&L.
Employees disagree.
The same NBER paper asked employees. Executives forecast employment down 0.7% over three years. Employees at the same firms forecast employment up 0.5%. A 1.2-point gap between the people signing the cheques and the people doing the work · and the workers have the better calibration track record on this cycle.
The throughput diagnosis
The NBER paper is not a claim that AI doesn't work. It's a claim that most organizations are deploying AI at the wrong step in their own process. The operations term is theory of constraints. The accounting term is throughput accounting. Both converge on the same rule: the speed of the whole system is set by the speed of its single slowest step. A speedup anywhere else never reaches output · real at the step, measurable at the step, invisible in the total. When 89% of firms report zero productivity impact, what they are reporting is that they bought speedups off the constraint and booked them as strategy.
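The rule is mechanical enough to fit in a dozen lines. A minimal sketch · hypothetical step names and rates, not figures from the paper · of why a doubling off the constraint books as zero:

```python
# Illustrative pipeline: throughput is set by the slowest step (the constraint).
# Step names and rates (units/hour) are hypothetical.

def throughput(rates):
    """Whole-system output per hour equals the minimum step rate."""
    return min(rates)

baseline = {"intake": 40, "underwriting": 10, "closing": 25}
print(throughput(baseline.values()))       # 10 -- underwriting is the constraint

# AI at the periphery: double intake speed. Output doesn't move.
periphery = {**baseline, "intake": 80}
print(throughput(periphery.values()))      # still 10 -- the zero the 89% report

# AI at the constraint: double underwriting speed. Output doubles.
at_constraint = {**baseline, "underwriting": 20}
print(throughput(at_constraint.values()))  # 20 -- the gain reaches the P&L
```

Same tool, same 2× speedup; the only variable is where it lands.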
This is why the NBER 89%, the PwC 56%, the MIT NANDA 95% and the McKinsey 5.5% all tell the same story. They are four different instruments pointed at the same phenomenon · adoption without constraint alignment. The 5.5% of firms attributing more than 5% of EBIT to AI are the ones who deployed at the constraint. The other 94.5% deployed at the periphery. Same tool. Same vendor. Same budget line. Different point of application · different result.
The forecast gap · 0% realized, +1.4% predicted · is the tell. Executives have not concluded that AI failed. They have concluded that the deployment strategy failed. The 1.4% forecast is a bet on better application, not better technology. Whether that bet pays depends on whether CFOs start asking the questions they aren't asking yet.
Before you buy, renew, or expand any AI tool · run every deployment through three questions.
- Where is the system's actual constraint? Write down the one step whose speed determines overall output. If your AI pilot isn't at that step, the speed gain can't reach revenue · by construction, not by accident.
- What throughput metric moves if this works? Name the specific P&L line that changes. "Saves time" is not a metric · someone still paid for the seat that saved the time. If you can't name the line, you're about to renew a tool that never mattered.
- What is the cost per active user, not per licensed user? Microsoft Copilot's 36% activation rate turns its €30/month list price into €83/month effective. Re-run the ROI math on every AI seat you own against the activated denominator, not the purchased one · a worked sketch follows this list.
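The math in question three is one division plus one reweighting. A minimal sketch · the €30 list price and 36% activation are the figures above; the seat count and per-user value are hypothetical placeholders for your own numbers:

```python
# Effective cost per active user, then ROI against the activated denominator.
list_price = 30.0      # EUR per licensed seat per month (figure cited above)
activation = 0.36      # share of licensed seats actually active (figure cited above)

effective_price = list_price / activation
print(f"{effective_price:.0f} EUR/month per active user")   # ~83 EUR

seats = 500                       # hypothetical licensed seats
value_per_active_user = 60.0      # hypothetical: the P&L line named in question two

spend = seats * list_price                            # what you pay
value = seats * activation * value_per_active_user    # what active users return
print(f"ROI: {value / spend:.2f}")                    # 0.72 -- below break-even
```

A tool that clears break-even per licensed seat (60 vs 30) can still lose money per active one; the denominator is the whole question.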
Adoption without constraint alignment is deployment theatre. The 5.5% who got paid deployed at the bottleneck.
The AI cycle entered the delivery phase and the delivery didn't show up for nine in ten firms. The escape path isn't more budget · the 2023–26 cohort already tested that. The escape path is constraint alignment. The firms that will reach the forecast +1.4% are the ones that stop deploying AI at the periphery and start deploying it at the single step whose speed decides the quarter. Everyone else is renewing seats that run at 36% activation and calling it a strategy deck.