You gave your team access to AI tools six months ago. The early adopters dove in – automating reports, drafting documents in half the time, running analyses that used to take days. Productivity metrics ticked upward. Your quarterly review looked strong.
Then you noticed something. Those same early adopters started working later. Taking on tasks outside their remit. Responding to messages at odd hours. They looked busier than ever, despite the tools that were supposed to free them up.
If this sounds familiar, you are watching a pattern that researchers have now documented in detail – and it runs directly counter to the story most organisations tell themselves about AI adoption.
Researchers at UC Berkeley spent eight months embedded in a 200-person technology firm, observing how AI tools changed the way people worked. What they found was not the efficiency dividend everyone expected.
Employees using AI worked faster. They took on a broader range of tasks. They filled what had been natural pauses – lunch breaks, gaps between meetings, quiet moments – with AI-prompted activity. The tools made work feel less like work and more like conversation, which sounds positive until you realise what it actually means: the boundaries between working and not working dissolved.
The researchers identified a self-reinforcing cycle. AI makes tasks easier, so workers do more of those tasks. Doing more raises expectations – both their own and their managers’. Higher expectations increase reliance on AI. Greater reliance widens the scope of what feels achievable. And the cycle accelerates.
As one participant put it: “You had thought that maybe you save some time, you can work less. But then you don’t work less. You just work the same amount or even more.”
Here is the counterintuitive finding. The first signs of burnout are appearing not among the sceptics or the reluctant adopters, but among the enthusiasts – the people who leaned in hardest.
This makes psychological sense. The employees most likely to experiment with AI tools are often the most conscientious, the most driven, the most eager to demonstrate value. When a tool expands what they can do, they expand what they attempt. Product managers started writing code. Designers took on engineering tasks. Researchers absorbed work that might previously have justified hiring additional staff.
The result is task expansion without corresponding boundary expansion. People are doing more, across a wider range, with fewer natural stopping points – and nobody explicitly asked them to.
The Berkeley research identified three distinct mechanisms through which AI intensifies rather than reduces work.
Task expansion. Workers step into adjacent roles. The cognitive boost from AI makes it feel possible – and even enjoyable – to take on responsibilities that were previously someone else’s. In the short term, this looks like cross-functional agility. Over months, it looks like one person doing two jobs.
Boundary erosion. Because interacting with AI feels informal – “like chatting,” as participants described it – work seeps into spaces previously reserved for recovery. Prompting an AI during lunch feels different from opening a spreadsheet, but the cognitive load is comparable. The informality masks the intensity.
Escalating multitasking. AI enables workers to manage multiple active threads simultaneously. The feeling of having a capable partner creates momentum, but it also fragments attention. The pace of output rises while the quality of thought risks declining – a trade-off that may not surface in productivity metrics until the damage is done.
Deloitte’s 2026 State of AI in the Enterprise report surveyed over 3,200 senior leaders and found a telling disconnect. Worker access to AI rose by 50% in 2025, and companies are racing to scale. But when leaders were asked to name the biggest barrier to integration, their answer was not technology – it was insufficient worker skills.
More revealingly, only 33% of organisations are redesigning career paths around AI, and just 30% are measuring worker trust and engagement. The majority are focused on education and upskilling – teaching people to use the tools – while far fewer are asking how the tools are changing the shape of work itself.
A separate study from the University of Chicago and University of Copenhagen found that AI chatbots saved workers roughly one hour per week – while simultaneously creating new tasks that negated those savings.
The pattern is consistent. Organisations are measuring the output side of the equation and ignoring the experience side.
If you work in consulting, none of this should feel entirely new. Technology introductions have always carried the risk of work intensification. Email was supposed to reduce meetings. Smartphones were supposed to create flexibility. Collaboration platforms were supposed to simplify communication.
Each delivered on its promise – and each quietly expanded the territory of work in the process.
What makes AI different is the speed and subtlety of the expansion. Previous tools added communication channels. AI adds capability. When you give someone a faster way to send messages, they send more messages. When you give someone the ability to do work they could not previously do, they do that work – and the organisation absorbs the extra output as the new baseline.
The researchers describe this as work intensification that is “masked by short-term productivity gains.” The gains are real. The cost is deferred. And by the time it surfaces as burnout, weakened decision-making, or turnover, the connection to the tool that started the cycle is no longer obvious.
The Berkeley researchers propose what they call an “AI practice” framework – a set of deliberate interventions to counteract the intensification cycle.
Intentional pauses. Build structured moments where teams step back from the pace that AI enables. Not as wellness theatre, but as a decision-quality measure. Before a major decision, require one counterargument. Before adopting a new AI-enabled workflow, ask what it will displace – not just what it will add.
Sequencing, not just speed. AI makes it possible to accelerate everything simultaneously. Effective teams resist this. They batch notifications, protect focus windows, and advance work in coherent phases rather than continuous streams. The discipline is not in using AI less, but in controlling when work advances.
Human grounding. The most important finding in the research may be the simplest. Teams that maintained regular, non-transactional human connection – genuine check-ins, not status updates – showed greater resilience to the intensification cycle. AI is a solitary amplifier. Without deliberate human grounding, it narrows the perspectives that inform decisions.
Most organisations introducing AI are asking: “How do we get our people to use this effectively?”
The evidence suggests a more useful question: “What are we going to stop doing?”
Every capability AI adds is a potential expansion of workload unless something else contracts. If your team can now produce reports in half the time, the question is not whether they will fill that time with more reports. They will. The question is whether you have decided, deliberately, what the freed capacity is for.
Without that decision, the default is intensification. Not because anyone chose it, but because nobody chose against it.
Your most engaged people are already absorbing the extra load. They are doing it willingly, even enthusiastically. And that is precisely what makes it dangerous.
The burnout will not announce itself as an AI problem. It will look like turnover, like declining quality, like senior people who seem inexplicably tired. By then, the cycle will be well established.
The time to design the boundaries is before you need them – not after your best people have quietly exceeded theirs.