
Adoption fails when AI doesn’t keep humans in the loop

John Margerison
March 2, 2026

Everyone besides a few hardline sceptics sees the potential in AI to transform workplaces, workflows, and just about everything else.

Yet there’s a thick castle wall in the way of many enterprises reaching that potential: employee adoption. Employees see the value – but they see it being somewhere else. Far away from them and their work.

So, how can enterprises pass the wall? Unfortunately, many enterprises have resorted to a brute force approach.

Accenture is a recent case in point. It has begun monitoring how much and how frequently staff use AI tools, and will be factoring this into top-level promotion decisions. It is struggling to increase AI usage – so it is forcing it through.

They understood that low adoption is a psychological issue, but they misunderstood what kind. They’ve assumed it’s a lack of incentive. That senior consultants like what they already know, and don’t feel the need or urgency to change it.

There is undoubtedly some truth in this. But I believe the biggest issue at play here is a lack of trust in the technology to deliver high-quality outputs. Over half of people are wary about trusting AI systems, according to a KPMG study, and 50 per cent of US employees asked by McKinsey reported inaccuracy as a risk posed by AI in the workplace.

Senior consultants at Accenture will have long-established and deeply entrenched working processes built on years, even decades, of experience. They’ll have high standards and reputations to maintain. Even pride in their work. And right now, they don’t trust that AI can do elements of their job to the same high quality.

The solution to this must not be to break down the wall with force, but to open the doors with a trust-building treaty. So, how do enterprises build trust and comfort with AI? How do they prove its value in a way that invites adoption, rather than forcing compliance and eroding trust?

My thesis is that the only way to do this is to build human quality checks into critical stages of the AI’s workflow. This gets to an issue at the very heart of how almost all AI currently operates.

The temptation with AI from the start has been to automate as much as possible as quickly as possible. To jump from zero to 100, overnight.

But getting employees to go along with this only works where AI is working on low-value, low-priority tasks.

To test this, think about how you use AI outside of work. If you’re using it to recommend some weekend activities, you probably want the whole thing generated in one go – quick and easy. But if you’re using AI to book your summer holiday, you’ll want it to run things by you, to make sure you’re happy with the location, the cost, and that the flight timings align with your schedule – the stuff only you can answer with confidence.

The same thinking is true when AI meets the workplace. In fact, it’s especially true in this case, given that a bungled job puts reputation, pride, and, yes, promotions at risk. Bringing human checks into the AI workflow is essential to building up trust and busting this adoption issue.

If enterprises are struggling to increase AI adoption and deployment, they need to pause. Instead of assuming the psychological issue lies with employees who just need to grow up and AI-up, they should assume that the psychological issue actually lies at the root, with the technology itself.


The technology challenge is that the AI probably isn’t involving your human employees as much as it needs to in order to gain their trust. Control needs to be clawed back into the hands of your employees, who are, after all, the experts.

This is what I call human-in-the-loop. I believe it is the critical first step in bringing trust-first AI systems – systems humans will voluntarily use – into every enterprise.

Think about it this way: would you outsource large, extended, and high-value tasks to a new hire you’re not yet familiar with? No, you’d want to check their work at regular stages to control quality and provide direction. The same must apply to AI.

Importantly, this doesn’t reduce AI’s impact – it increases it. It might slow down the time it takes to complete a given task relative to having AI run the task end-to-end. But the output from the combined human-AI process is likely to be of much higher quality.

We’re seeing a pandemic of “AI slop” infecting workplaces right now, precisely because too much responsibility is being handed to AI with too little in the way of human checks.

The same KPMG study found 66 per cent of people at work rely on AI output without evaluating its accuracy, and 56 per cent are making mistakes in their work due to AI. Workday calculates that 37 per cent of the time employees save using AI tools is lost to reworking low-quality outputs.

This further breaks down trust in AI from employees and clients – and creates a drag on employees’ time.

So, not only can keeping humans in the loop build trust with employees, but it also ensures work remains high quality, saving time and protecting trust with clients too.

My message to enterprise decision-makers: stop blaming your employees for not adopting AI fast or deeply enough. Instead, provide them with AI tools that they can trust to support them in continuing to deliver the quality of work they’re proud of.

John Margerison

John Margerison is an international entrepreneur focused on building trust-first, human-in-the-loop AI systems for enterprise.

He is the CEO and founder of XFactorAi, the AI communications platform pioneering AI systems for enterprises and government environments. XFactorAi’s proprietary technology powers WorkPilot, which enables large enterprises to embed AI into their communications to streamline workflows, ensure compliance, and drive results.

Alongside XFactorAi, John maintains an investment portfolio through his family office.
John writes a regular newsletter about building trust-first AI for enterprises.
