Berkeley Haas School of Business tracked 200 tech industry employees for nine months and found that AI tools did not reduce workload; instead, they fostered a high-intensity work mode of “multithreaded parallelism,” with cognitive load and burnout risks rising simultaneously.
If you’ve read any tech industry investment memos in the past year, you’ve probably seen a version of the same narrative: AI will dramatically boost productivity, employees will do less while producing more, corporate profits will soar, and humans will finally be free to spend their time on “more creative” work.
This story sounds beautiful. But the problem is… it might be wrong.
Two scholars from Berkeley Haas, Aruna Ranganathan and Xingqi Maggie Ye, published a nine-month study in Harvard Business Review. From April to December 2025, they tracked 200 employees at an American tech company, observing how behavior changed after AI tools were integrated into daily work.
The conclusion is straightforward: AI did not reduce work; AI intensified work.
The research team found that introducing AI tools did not actually decrease task load or shorten working hours. Instead, it created a new work rhythm—“multithreaded parallelism.”
Specifically, employees would write code by hand while letting AI generate alternative versions; run multiple AI agents on different tasks at once; and even revive long-shelved projects because “AI can handle the background processing anyway.”
On the surface, this appears to boost productivity. The number of concurrent projects increased, and output sped up. But what the researchers observed was:
“Continuous attention switching, frequent checks of AI outputs, and an ever-growing to-do list. This creates cognitive load and a feeling of always juggling, even if the work itself seems productive.”
In other words, employees really did get more done. But they also became more exhausted, and it was a particular kind of exhaustion: feeling highly efficient all day, yet completely drained once work was over.
Berkeley’s data comes from inside a single company, but the same pattern is showing up outside corporate walls as well.
Developer Simon Willison shared this study on his personal blog, noting openly that his own experience aligns closely with the findings. As one of the most active practitioners of large language models (LLMs), Willison has long documented his workflow with AI tools. He says he can now push forward two or three projects in parallel, getting far more done than before.
But the cost: he burns through his energy within one to two hours.
He has observed similar patterns among developers around him: some, caught in a “just one more prompt” mentality, keep coding into the early morning hours, with serious consequences for their sleep. It doesn’t feel like overtime; it feels like playing a game you can’t save: you know you should stop, but the next round is too tempting.
When academic research and frontline practitioners’ experiences point to the same conclusion, it’s no longer an isolated case but a structural issue.
The key insight from the research team isn’t just that “AI makes people more tired,” but their diagnosis of the cause: organizations lack structured norms for AI use.
Most companies, when adopting AI tools, do the following: purchase licenses, create accounts, attach a PDF of “best practices,” and then expect employees to figure out the optimal way themselves. It’s like installing a turbocharged engine on a bicycle and telling the rider, “Figure it out.”
The researchers recommend that companies establish formal AI practice frameworks, clearly defining where AI should be used, where it shouldn’t, and how to distinguish “genuine efficiency gains” from “merely working harder in place.”
Let’s place the research findings into a broader context.
Over the past year, “AI boosting productivity” has been a core logic driving tech stock valuations. From Nvidia to Microsoft, from OpenAI to various AI startups, the entire industry chain’s valuation premise is: AI will increase each knowledge worker’s output by 2 to 10 times, enabling companies to do more with fewer people, with profit margins structurally rising.
But if Berkeley’s research is right, and AI’s real effect isn’t “doing less” but “doing more, and getting more tired doing it,” then this valuation logic needs recalibration.
Productivity gains and increased work intensity are two different things. The former reduces costs and boosts profits; the latter may raise output temporarily but, over the long term, leads to burnout, higher turnover, and declining quality. Apply Berkeley’s findings to corporate models and AI may not be raising profit margins so much as redistributing labor costs: more spending on training, more on mental health, more on replacing the people who leave.
Of course, this doesn’t mean AI has no value. It’s clearly valuable. But its value might not be “making people do less,” but “making people do different things.” And different doesn’t necessarily mean easier.
The study also points to an often-overlooked factor: the adaptation period. Willison made this point when sharing the study.
Current work norms—how attention is allocated, how performance is measured, how “a day’s work” is defined—have been built over decades. The explosive adoption of AI from 2023 to 2025 is like demanding the entire knowledge economy relearn how to work in just two years.
This relearning won’t happen automatically. It requires deliberate organizational design, an update in how managers think, and most importantly, an acknowledgment that “more output” and “better work” are two entirely different things.
Silicon Valley loves the term “10x engineer” for its most productive engineers. AI’s promise is to make everyone a 10x engineer. But this study suggests that what we may get isn’t 10 times the efficiency, but 10 times the fatigue… What do you think?