TL;DR
- The most interesting AI-era work problem is not that people suddenly have less to do. It is that once friction falls, expectations, scope, and cognitive load expand to fill the gap.
- Steve Yegge’s “The AI Vampire,” Theunis De Klerk’s “AI + ADHD,” and Aruna Ranganathan and Xingqi Maggie Ye’s “AI Doesn’t Reduce Work—It Intensifies It” are all describing different layers of the same phenomenon.
- The pattern is counterintuitive but increasingly visible: AI often removes execution friction faster than organizations remove expectations.
- The result is a new class of professional strain: possibility overload, background anxiety from too many open loops, verification fatigue, weaker work boundaries, and rising pressure to turn capability into constant output.
AI was supposed to give professionals back time.
That is still the sales pitch. Draft faster. Summarize faster. Code faster. Analyze faster. Ship faster.
But a different story is now emerging from early adopters, researchers, and workplace studies: in many cases, AI does not simply reduce work. It changes the shape of work in ways that make people feel busier, more cognitively loaded, and more permanently “on.”
That is what makes the recent wave of writing on this topic so interesting. These are not standard anti-AI arguments. They are arguments about a modern human problem: when the cost of starting drops, the number of viable starts multiplies. When output accelerates, expectations reset. When drafting becomes cheap, judgment becomes the real bottleneck.
This synthesis uses sources current through April 14, 2026.
The Structure of the Problem
Taken together, the three source articles trace a single arc.
The problem begins at the level of the individual mind, where AI turns possibility into pressure and leaves more loops open than people can comfortably close. It then expands into the culture of work, where accelerated capability becomes a fight over expectations, ambition, and value capture. Finally, it shows up at the organizational level, where pace rises, scope widens, and work fills more hours of the day.
That movement, from cognition to incentives to structure, is the backbone of the article that follows.
What Each Article Is Really Saying
1. AI + ADHD: when possibility becomes pressure
Theunis De Klerk’s essay is the most psychologically precise of the three.
His argument is not simply that AI helps people with ADHD. In fact, he agrees that it often does. AI matches many ADHD strengths unusually well: nonlinear learning, fast prototyping, conversational exploration, immediate gap-filling, and low-friction experimentation.
The deeper argument is that this benefit creates a second-order problem.
When friction drops, the number of viable projects rises. That sounds liberating, but it also means more partially open loops, more unfinished paths, and more latent obligations competing for attention. The stress no longer comes from “I cannot do this.” It comes from “I could do all of this, so why is it still unfinished?”
His proposed concept, meta-completion, is useful well beyond ADHD. It names a skill many professionals now need: deciding in advance what “finished” means, what not to pursue, and which projects should be allowed to die. In AI-rich environments, closure no longer happens naturally. It has to be designed.
The broader insight is powerful: AI does not only increase execution capacity. It increases the number of mentally active possibilities.
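For readers who live in code, one way to make meta-completion concrete is to treat it as an explicit data structure rather than a good intention. The sketch below is a hypothetical illustration, not anything De Klerk proposes: every loop gets a written definition of done and a review date at the moment it opens, and the number of concurrently open loops is capped. All of the names here (Loop, Ledger, MAX_OPEN_LOOPS) are invented for this example.

```python
# A minimal sketch of meta-completion as a working practice.
# Assumption: "closure" is designed up front, not discovered later.
from dataclasses import dataclass, field
from datetime import date

MAX_OPEN_LOOPS = 3  # the exact cap is arbitrary; what matters is that one exists


@dataclass
class Loop:
    name: str
    done_when: str   # written before work starts, not after
    review_by: date  # the date this loop is closed or killed, no silent extensions


@dataclass
class Ledger:
    open_loops: list[Loop] = field(default_factory=list)

    def start(self, loop: Loop) -> None:
        # Starting something new requires having closed something old.
        if len(self.open_loops) >= MAX_OPEN_LOOPS:
            raise RuntimeError("Close or kill an existing loop before opening another.")
        self.open_loops.append(loop)

    def triage(self, today: date) -> list[Loop]:
        # Loops past their review date are returned for an explicit decision,
        # instead of lingering as background anxiety.
        expired = [l for l in self.open_loops if l.review_by < today]
        self.open_loops = [l for l in self.open_loops if l.review_by >= today]
        return expired
```

The data structure is trivial on purpose. The design choice that matters is that “finished,” “killed,” and “too many” are all decided in advance, which is exactly the discipline low-friction environments no longer supply for free.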
2. The AI Vampire: productivity gains can become extraction
Steve Yegge’s “The AI Vampire” is more cultural and polemical, but it lands on a serious point.
His thesis is that once AI creates major productivity gains, a fight begins over who captures that value. If workers use AI to produce dramatically more in the same number of hours, companies are tempted to absorb the gain as more output. If workers try to keep all the benefit as less work, competitive pressure pushes back the other way. The tension creates a dangerous middle zone where both companies and employees are pulled toward unsustainable acceleration.
Yegge also adds something important that more formal workplace research often misses: AI can be physically and psychologically draining even when it feels exhilarating. He describes the mix as dopamine, adrenaline, addiction, fatigue, and rising standards all at once. In his framing, the problem is not only that managers may demand more. It is also that ambitious professionals can internalize the new pace and overextract from themselves.
His metaphor of the “vampire” is memorable because it captures a modern professional fear: the tool that makes a worker feel more powerful can also make it easier for organizations, markets, and ambition itself to take more out of that worker.
3. AI Doesn’t Reduce Work—It Intensifies It: speed expands the work container
The Harvard Business Review piece by Aruna Ranganathan and Xingqi Maggie Ye is the most research-oriented articulation of the same paradox.
Because the full HBR article is subscriber-only, the safest summary comes from combining the HBR preview with UC Berkeley Haas’s follow-up interview with the authors.
Their key finding is straightforward: AI did not simply help workers finish the same jobs faster. It widened what they attempted, pulled work into more hours of the day, and increased the number of parallel threads they kept alive.
In their fieldwork, intensification showed up in three concrete ways:
- people took on work that previously belonged to someone else, or work that would otherwise never have been attempted
- work leaked into time that used to function as a pause
- workers kept multiple AI-assisted threads running at once, sometimes with several agents in parallel
That is the most important point in the whole discussion. AI does not only compress tasks. It expands the container around tasks. It changes what counts as reasonable output, what counts as someone’s job, and what counts as an acceptable stopping point.
The Shared Thesis Beneath All Three
These three articles are not making separate arguments.
They are describing different faces of the same structural shift:
- AI reduces execution friction
- reduced friction increases the amount of work that feels possible
- increased possibility creates more open loops, more self-directed ambition, and more managerial expectation
- the new bottleneck becomes attention, judgment, closure, and recovery
That is why the new AI-era professional problem feels so strange.
The tool is genuinely helpful. The output is often genuinely better. The speed gains are often real. And yet many people feel more stretched, not less.
The contradiction disappears once the underlying mechanism comes into view:
AI is not just a labor-saving tool. It is a scope-expanding tool.
More AI-Era Problems Professionals Are Facing Now
The three essays point to a broader cluster of problems that are now showing up in research and workplace reporting.
1. Boundary collapse and the infinite workday
When starting a task becomes as easy as opening a prompt, the workday loses some of its natural stopping points.
That shows up clearly in the Berkeley Haas reporting: workers prompted AI at lunch, before meetings, in the evening, and during moments that previously acted as breaks. The result was not always a dramatic overtime event. It was a slow erosion of boundaries.
This is consistent with Microsoft’s recent framing of the “infinite workday”: work is no longer contained by a neat sequence of meetings, focused output, and an end-of-day stop. AI makes it easier to keep momentum going, but that can quietly turn every spare moment into usable work surface area.
The modern issue here is not simply “more hours.” It is the disappearance of friction that used to protect recovery.
2. Verification tax and cognitive supervision load
A common fantasy about AI is that it removes thinking. In practice, it often moves thinking upstream and downstream.
Microsoft Research found that knowledge workers described a shift in critical thinking toward goal-setting, prompt refinement, verification, integration, and task stewardship. Higher confidence in AI was associated with less critical thinking, while higher confidence in one’s own expertise was associated with more critical-thinking effort.
Anthropic’s internal research found something similar. Even in a company full of sophisticated AI users, most employees reported they could fully delegate only a small fraction of their work. AI was a constant collaborator, not a fully trusted substitute. People still had to supervise, validate, and debug.
This creates a new professional problem: verification tax.
A professional may draft faster, but the work still shifts toward inspecting, comparing, correcting, and stitching together more material than before. It becomes less about producing the first version and more about guarding against subtle wrongness.
3. AI brain fry and decision fatigue
In March 2026, BCG summarized findings from a study of 1,488 US workers at large companies and described a pattern they called “AI brain fry.” Their claim is that heavy AI use, or constant oversight of AI systems, can produce mental fatigue, more errors, decision overload, and even a stronger intention to quit.
That fits what early adopters are already reporting informally: the work can feel thrilling in the moment but cognitively expensive in aggregate.
This matters because AI fatigue is not identical to old-school overwork. Traditional overload often came from volume. AI overload can come from:
- too many concurrent threads
- too many plausible directions
- too much low-grade evaluation
- too many micro-decisions about whether to trust, revise, discard, or escalate
In other words, the exhaustion is often executive rather than purely mechanical.
4. Skill broadening paired with skill atrophy
One of the most exciting AI outcomes is breadth. People can do work that used to sit outside their normal expertise. Anthropic found that employees were becoming more “full-stack,” tackling front-end work, data tasks, and adjacent domains that they would previously have avoided.
But the same source also reports the darker side: concerns about skill atrophy, weaker underlying understanding, and less manual problem-solving practice. Some engineers worried that by relying on AI to jump straight to answers, they were skipping the deeper learning that comes from working through a problem the hard way.
That creates a paradox:
- AI makes professionals more capable across a wider surface area
- AI can also weaken the depth required to supervise that wider surface area well
This is especially risky because supervision itself depends on judgment. If the craft weakens too far, the ability to safely use the tool weakens with it.
5. Mentorship and social learning erosion
Anthropic’s research also points to a quieter organizational problem: AI becomes the first stop for questions that used to go to colleagues. Some employees reported fewer mentorship moments and less collaboration as a result.
This matters more than it first appears.
A lot of professional growth does not happen through formal training. It happens through interruptions, questions, shadowing, correction, and exposure to how more experienced people think. If AI absorbs too much of that early-stage interaction, organizations may gain speed in the short term while weakening the social infrastructure that develops judgment over time.
This is one reason the AI transition may feel very different for senior and junior professionals. Experts can often use AI as an accelerator. Less-experienced workers may use it as a substitute before they have built enough internal models to judge it well.
6. Uneven AI readiness and a new inequality inside organizations
Another emerging issue is that many workers are being asked to adapt faster than institutions are helping them adapt.
A 2025 Salesforce and Morning Consult survey of more than 14,000 adults across 13 countries found strong worker demand for AI training, but much lower confidence that employers or governments are investing enough in readiness. Only 29% of workers globally said their workplace invests enough in AI training, and fewer than half of employed adults said their workplace was prepared to use AI tools in daily work.
That means the AI transition is not landing on a level playing field.
Professionals with good tools, strong prompting habits, supportive teams, and room to experiment will compound faster. Others will face the pressure of AI expectations without the training, norms, or managerial support needed to use the tools well. The result is not only skill inequality. It is stress inequality.
What Professionals Should Take Seriously
The central lesson from all of this is simple:
The human bottleneck in AI work is no longer raw production. It is attention management, judgment quality, closure, and recovery.
That has several implications.
First, professionals need better stopping rules. Theunis De Klerk’s idea of meta-completion is useful because it names the missing discipline in low-friction environments. If the cost of continuing is always low, then intentional stopping becomes essential.
Second, organizations need to resist confusing expanded capability with sustainable baseline capacity. The Berkeley Haas work is especially important here. What feels like healthy momentum in the short run can become a permanently denser workday if nobody protects boundaries.
Third, teams need to treat AI oversight as real work. Verification, integration, taste, judgment, and counterargument are not overhead to be ignored. They are now core professional functions.
Finally, the healthiest question is no longer “How much faster can we go with AI?”
It is:
What work should expand, what work should stop, and what human limits are we trying not to break in the process?
That is the real future-of-work question.
Sources
- The AI Vampire — Steve Yegge, Medium, February 11, 2026
- AI + ADHD. The Cost of Infinite Possibility — Theunis De Klerk, Medium, February 9, 2026
- AI Doesn’t Reduce Work—It Intensifies It — Aruna Ranganathan and Xingqi Maggie Ye, Harvard Business Review, February 9, 2026
- AI promised to free up workers’ time. UC Berkeley Haas researchers found the opposite. — UC Berkeley Haas, February 18, 2026
- How AI is transforming work at Anthropic — Anthropic, December 2, 2025
- The Future of AI in Knowledge Work: Tools for Thought at CHI 2025 — Microsoft Research, April 15, 2025
- When Using AI Leads to “Brain Fry” — Boston Consulting Group, March 5, 2026
- How Microsoft 365 Copilot and agents help tackle the infinite workday — Microsoft, June 26, 2025
- Growing Worker Demand for AI Skills Creates Opportunity for Institutions — Salesforce / Morning Consult, September 18, 2025
- From Junior to Senior: Allocating Agency and Navigating Professional Growth in Agentic AI-Mediated Software Engineering — Dana Feng, Bhada Yun, and April Yi Wang, arXiv, February 2026