ASPOREA® Make AI and Digital Transformation Actually Stick

The KPMG AI Training Cheating Scandal

According to a recent article in the Financial Times, a KPMG partner reportedly used an AI tool to pass an internal training course about using AI, and was fined for it. On the surface it looks like a tidy story about misconduct: someone broke a rule, the firm enforced consequences, end of tale.

But the more revealing story sits underneath. This is what happens when organisations try to teach and test a new capability using methods that were designed for a pre-AI world. The incident is less a morality play than a stress test, and it exposed something many workplaces have been quietly relying on for years: the assumption that “a test” equals knowledge transfer.

That assumption is not merely questionable now. In many settings, it is plainly false.

The Knowledge Transfer Fallacy in Workplace Training

For a long time, workplace learning has leaned on a simple pipeline model. People complete training modules, they pass a quiz, and leaders conclude capability has improved. It is appealing because it is measurable. Completion rates look neat on a dashboard. Pass marks offer certainty. Compliance teams can point to a tidy audit trail.

Yet most of these tests do not measure knowledge as a practitioner would recognise it. They measure recall, recognition, and the ability to repeat phrasing. They reward familiarity with what the organisation wants you to say rather than competence in what the organisation wants you to do. Even before AI, that was a weak proxy for real capability. With AI, it becomes almost meaningless, because the machine can perform recall and rephrasing better than most humans and with a lot less coffee.

Why AI Makes Memory Based Testing Obsolete

AI has changed the economics of memory. If an assessment can be passed by locating information and producing a plausible answer, then a tool that excels at locating information and producing plausible answers will inevitably be used. Not always out of malice. Often out of convenience. Sometimes out of misunderstanding. Occasionally out of sheer exhaustion. The point is that the assessment itself is no longer a reliable signal of capability. When the test is easy for AI, the test is not testing the human.

That is not a reason to shrug at policy breaches. It is a reason to interrogate the design of what we are trying to achieve through training in the first place. If the objective is to build safe, effective use of AI, then the training approach needs to reflect the reality of how AI will actually be used at work.

What Competence Looks Like in an AI Enabled Workplace

The deeper issue is that many organisations, not just KPMG, still treat training as knowledge transfer and testing as proof. But modern work requires something different. The capability we want is judgement: knowing how to frame a task, how to ask the right questions, how to spot when something feels off, how to verify, how to manage risk, and how to be transparent about what tools were used and why.

Those are not memory skills. They are professional skills. And they are the very skills that shallow quizzes tend to ignore because they are harder to assess.

Make AI Use Mandatory and Assess the Human Work

Trying to keep AI out of learning and assessment is like trying to keep calculators out of accounting. You might win a few skirmishes, but you will lose the war, and you will end up measuring the wrong thing while you are at it.

A more honest training design is one that expects AI to be used, then assesses what the human does with it. Make AI part of the exercise. Require learners to use it in approved ways, then require them to demonstrate judgement: how they prompted, how they checked, what they changed, what they rejected, what risks they considered, what they recorded.

In that model, AI is not a shortcut. It is a tool, and the learner is being marked on professional practice.

Measure Output Quality Instead of Training Completion

Most organisations track training completion and call it a day. But completion tells you almost nothing about whether capability improved. If you want evidence that training worked, you need to look at outputs over time: fewer errors, less rework, clearer reasoning, better consistency, stronger documentation. You do not need perfect metrics, but you do need metrics that reflect reality rather than theatre.

The Practical Lesson Organisations Should Take

The KPMG story is uncomfortable because it punctures the illusion that we can keep our old learning models and simply add a new policy paragraph called “don’t use AI”. People will use AI because it is useful, because it is fast, and because it is increasingly normal. If the only thing standing between intended behaviour and actual behaviour is a rule and a multiple choice quiz, the rule will be tested, and the quiz will be defeated.

The practical response is not to fight the presence of AI. It is to modernise what we mean by competence. Test application, not memory. Test judgement, not definitions. Make the learning experience match the work environment. And measure what matters: quality, verification, transparency, and responsible outcomes.

If we do that, the next time an organisation discovers “AI was used on the AI test”, the right reaction will not be shock. It will be a calm realisation that the assessment was asking the wrong question.

And that, in a strange way, might be the most valuable lesson AI is teaching us.

