Managing in a world saturated with AI

Over the past few years, organizations have focused on identifying high-value use cases for AI: enhancing marketing performance, improving financial analysis, accelerating software development, and more. These discussions largely centered on a familiar dilemma: how to balance automation with human augmentation.

In 2025, however, the conversation has shifted. With unprecedented adoption levels and dramatically lower AI costs, we now operate in environments where AI is used everywhere, by everyone. This ubiquity reshapes work in ways that are often counterintuitive: job applicants and recruiters both rely on AI, teams collaborate through AI-mediated workflows, and leaders must maintain legitimacy when algorithms increasingly inform decisions.

Drawing on emerging research, this post examines how AI saturation transforms organizational dynamics and challenges existing management models. We conclude with five strategic questions that every leader should consider as they prepare their organizations for the next decade of AI-enabled work.

The shift

Two phenomena have accelerated and reinforced one another: adoption rates and falling technology costs. Consultant reports have piled up recently, highlighting just how widespread AI adoption has become. McKinsey estimates that almost 90% of companies now use AI in at least one business function, and Bain reports that AI is a top priority for 74% of the executives it surveyed.

This adoption is accelerated by ever-cheaper access to the technology. The cost-per-task for frontier AI models, for example, has dropped 300-fold year-on-year. Greater accessibility acts as a reinforcing loop for adoption: the technology performs better, so it is used more; learning-curve effects and economies of scale then make it cheaper, which draws in still more users.

The main implication of this recent acceleration is that the technology is now everywhere. The question is no longer “How can AI do X?” but rather “How do we operate when AI does X by default—and so does everyone else?”

Three surprising insights from recent research

A wave of new research challenges our assumptions about how AI transforms work. Here are three surprising findings:

1. Recruitment is broken

Recruitment is one of the first domains where AI saturation reveals its second-order effects. Candidates now routinely use generative models to draft CVs, refine cover letters, and rehearse interview answers. Recruiters, meanwhile, deploy similar systems to filter applications, assess tone, and detect patterns at scale. On both sides, the technology optimizes for what the other side is assumed to value. The result is not better matching, but signal erosion. When everyone writes with the same statistical sense of “good,” writing quality ceases to be informative. Recent research makes this distortion visible: with the widespread use of generative AI in written applications, candidates in the highest ability quintile are hired 19% less often than before large language models became ubiquitous. Lower-ability applicants are not cheating the system; they are simply presenting closer to the mean. This is not a moral problem, but a structural one.

When optimization becomes symmetric, the traditional indicators of potential break down. In a world where AI equalizes surface competence, organizations can no longer rely on polish, fluency, or preparation as proxies for talent. The managerial challenge is therefore not to detect AI usage, but to redefine what counts as signal in the first place and to design selection processes that surface qualities AI cannot so easily smooth away.

2. Automation doesn’t always reduce workloads

In 2016, Geoffrey Hinton famously suggested that radiology was on the verge of obsolescence. Image recognition was improving so fast, he argued, that training new radiologists made little sense. Technically, he wasn’t wrong: today’s AI systems often outperform humans at identifying anomalies in medical images. And yet, nearly a decade later, the empirical outcome points in the opposite direction. In the UK, the number of radiologists employed by the NHS has increased by more than 40 percent since that prediction. AI did not arrive as a substitute that cleanly removed human labor from the system. It arrived as a complement that reshaped the work around it—and, in doing so, expanded it.

What explains this paradox is that automation changes systems, not just tasks. AI tools in radiology are used by radiologists, not instead of them, and they bring with them new forms of work: validating model outputs, monitoring performance over time, handling edge cases, and assuming responsibility for decisions made with algorithmic assistance. At the same time, efficiency triggers a classic rebound effect. If AI makes MRI scans faster and cheaper, the system responds by producing more scans, not fewer reports. Add an ageing population and a structurally rising demand for medical imaging, and the result is workload expansion rather than contraction. Unsurprisingly, surveys show that only a small minority of clinical directors report workload reductions, while a far larger share report the opposite. The broader lesson extends well beyond healthcare: when organizations assess automation purely through a cost-reduction lens, they systematically underestimate the ways in which efficiency creates new demand, new responsibilities, and new forms of coordination. Many AI projects disappoint not because the models underperform, but because the organization misunderstood what “automation” would actually automate.

3. Sometimes, full automation outperforms human-AI collaboration

A recent study on visual generative AI in advertising adds an uncomfortable wrinkle to the standard narrative of human–AI collaboration. The researchers compared three approaches: ads created by human experts, ads created by humans and then “improved” with generative AI, and ads generated entirely by visual AI systems. The intuitive expectation is that augmentation should dominate—that combining human judgment with machine capability would yield the best results. The data says otherwise. Fully AI-generated ads consistently outperform both human-only and human-plus-AI variants, while AI-modified ads show no statistically significant improvement over human benchmarks. In other words, adding AI to human creative work doesn’t help—and in some cases, it actively hurts. This echoes findings from other domains, such as medical diagnosis, where hybrid human–AI decision-making underperforms the best standalone performer. The mistake is assuming that “human in the loop” is always a virtue; in some settings, it is a source of noise.

What makes this result more unsettling is what happens when audiences are told the truth. Disclosing that an ad was generated by AI reduces its effectiveness by up to 31.5 percent. The same output performs materially worse once its origin is revealed. This suggests that part of AI’s advantage lies not just in pattern optimization, but in the absence of interpretive framing: viewers respond to the stimulus, not to its authorship. From a managerial perspective, this creates a stark tension. If full automation performs best, and disclosure undermines performance, then the performance gap becomes, in effect, the cost of transparency. The broader implication is not that ethics should be abandoned, but that the familiar playbook of augmentation and disclosure may be poorly suited to environments where AI already exceeds human performance. In such cases, insisting on collaboration or visibility does not humanize the system; it degrades it. And that raises a harder question for organizations: when AI works better alone, what exactly are we preserving by keeping humans visibly in the process?

A new managerial agenda

The age of ubiquitous AI demands more than governance frameworks. It demands a rethinking of how work, authority, and meaning are structured. Leaders must ask new questions:

  1. Who owns a task when it passes fluidly between AI and humans? AI drafts. A human revises. Another AI summarizes. Who’s responsible?

  2. What motivates people when visibility, mastery, or authorship is blurred? Signing off on an AI-generated output may be efficient—but does it feel meaningful?

  3. What makes leadership credible when AI knows more, sees further, and reacts faster? When managers can’t rely on informational superiority, legitimacy must come from somewhere else.

  4. How do we preserve trust and connection when most communication is AI-mediated? When the “voice” of the team is synthetic, how do we maintain real relationships?

  5. What does decision-making mean when AI anticipates every option before we’ve finished reading? Judgment, not just knowledge, becomes the scarce and strategic resource.

Redesigning the human in the loop

AI saturation does not eliminate the need for leadership; it exposes what leadership was never really about. When intelligence becomes cheap, fast, and ambient, managers can no longer anchor their authority in superior information or analytical reach. AI must be understood not as a support function operating in the background, but as an actor that reshapes how work unfolds and how decisions are made. The task, then, is not to keep humans “in the loop” as a symbolic gesture of control, but to place them deliberately where they remain irreplaceable: in making sense of ambiguity, creating meaning, setting direction, and taking responsibility when outcomes are uncertain.

The central challenge is therefore not how to govern AI systems, but how to lead organizations whose logic is increasingly shaped by them. AI forces management to confront what is distinctly human, not to defend it nostalgically, but to redesign it intentionally for a world where intelligence is no longer scarce.

Photo by note thanun on Unsplash
