Alan Kohler’s recent ABC article, As the price of intelligence collapses, agentic AI is replacing human workers, lands on a confronting but important point: AI is no longer just a productivity tool sitting beside the workforce. It is increasingly being positioned as a form of digital labour within it.
That shift matters.
For business and government leaders, the real question is no longer whether AI should be adopted in the workplace. In many sectors, that decision is already being made by market pressure, cost pressure, public expectation and the pace of technological change. The more important question is how AI can be adopted effectively, responsibly and sustainably.
This is where many organisations will succeed or stumble.
Kohler’s article points to a world in which organisations are using agentic AI to undertake defined tasks that were once carried out by people, from customer interactions to document processing and administrative work. If that trajectory continues, leaders will need to think far beyond tools, licences and pilots. They will need to think carefully about work design, trust, service quality, governance, workforce capability and the human experience of change.
In other words, effective AI adoption is not fundamentally a technology challenge. It is a leadership and implementation challenge.
The first consideration: AI adoption is a work redesign exercise
One of the most common errors in digital transformation is to treat a structural shift as if it were a software deployment. AI adoption invites exactly that mistake.
If leaders define AI as simply another platform rollout, they are likely to focus on procurement, technical integration and training packages. Those things matter, but they are not the heart of the task. The real task is to understand how work is changing.
When AI can draft, analyse, classify, triage, retrieve, summarise, route and respond, leaders need to examine what work is actually being done today, what judgment is required, where human interaction matters, where risk sits and what should genuinely be automated. That means AI adoption should begin with workflow analysis, service design and role clarity, not with enthusiasm over the latest vendor demonstration.
The key leadership question becomes straightforward, even if the answer is not: what work should remain human, what work can be AI-enabled, and what work should be redesigned altogether?
That is not an IT question. It is an organisational design question.
The second consideration: productivity gains are real, but so are implementation risks
Kohler’s article highlights the collapsing cost of machine intelligence. That has major implications for business and government alike. It means AI capability is becoming more accessible, more scalable and harder to ignore. It also means organisations may feel pressure to move quickly in order to remain competitive or financially sustainable.
That pressure is understandable, but speed without discipline creates its own problems.
There is a pattern common to many transformations: leaders become captivated by the promise of efficiency and underestimate the practical complexity of implementation. AI is particularly susceptible to this because its outputs can appear impressive before they are reliable at scale. A polished answer from a model is not the same thing as a robust operating process.
For this reason, leaders should resist two equal and opposite mistakes. The first is paralysis, where governance becomes so heavy that nothing meaningful is ever implemented. The second is overreach, where organisations automate too much, too quickly, with too little thought for controls, consequences or user experience.
The more effective path sits in the middle. It is deliberate, iterative and grounded in real operational outcomes.
The third consideration: trust will determine adoption more than technology capability
AI adoption is often discussed as though the key barrier is skill. In practice, trust is usually the bigger barrier.
Employees want to know what AI means for their role, their team and their future. Customers want to know whether service quality will improve or deteriorate. Citizens want confidence that government decisions affecting them remain fair, reviewable and accountable.
If leaders do not address these questions directly, people will fill the vacuum themselves, usually with suspicion. That is hardly surprising. Where the dominant narrative is that AI is replacing jobs, any workplace adoption effort framed purely around productivity is likely to be heard as a threat.
This is where leadership communication matters enormously. Effective adoption requires more than a vision statement about innovation. It requires a credible account of what is changing, why it is changing, what principles are guiding decisions, where humans remain accountable and how the organisation will support its people through the transition.
If employees believe AI is simply a cost-cutting instrument dressed in modern language, engagement will fall, resistance will rise and workarounds will flourish. If they can see that AI is being introduced with care, clarity and an understanding of real work, adoption becomes far more achievable.
Trust, in this context, is not a soft issue. It is an implementation issue.
The fourth consideration: capability building must extend beyond technical literacy
Another implication of Kohler’s article is that the workforce challenge is not limited to specialists. If AI becomes embedded in everyday work, then broad organisational capability becomes essential.
This does not mean every employee needs to become an AI engineer. It means the workforce needs practical fluency. People need to know how to use AI appropriately, how to check outputs, how to exercise judgement, how to identify risk, how to protect sensitive information and how to work in AI-enabled processes without over-relying on the tool.
Leaders often underestimate the managerial capability required here as well. Managers will be the ones expected to oversee AI-enabled work, make decisions about when human intervention is needed, support teams through uncertainty and hold together performance, wellbeing and accountability at the same time. If managers are not equipped, AI adoption will wobble no matter how good the platform appears to be.
Capability therefore needs to be built at several levels: executive understanding, managerial confidence, frontline practical use and specialist governance expertise. Sending everyone to a one-hour webinar and hoping for the best is unlikely to qualify as transformation. It barely qualifies as an afternoon.
The fifth consideration: governance must be practical, not ornamental
Most organisations now understand that AI requires governance. The difficulty is that governance can easily become one of two things: either so vague that it offers no useful direction, or so restrictive that it prevents meaningful adoption.
Effective governance should help an organisation move with confidence, not merely slow it down.
For business leaders, this means being clear on issues such as data use, privacy, quality assurance, accountability, risk thresholds, escalation pathways and vendor management. For government, these same issues apply, but with higher expectations around transparency, procedural fairness, public trust, record keeping and reviewability.
A useful governance model is one that distinguishes between low-risk and high-risk applications. Not every use case needs the same level of scrutiny. Internal drafting assistance is different from decision support affecting payments, entitlements, compliance outcomes or public safety. If leaders fail to differentiate, they either over-control trivial use cases or under-control consequential ones.
What matters most is that governance is translated into workable decisions. Leaders need to know what is permitted, what is restricted, what requires human review and who remains accountable. Principles alone are not enough. Implementation lives in the detail.
The sixth consideration: public and private sector contexts are different, but not separate
Kohler’s article references corporate responses to AI adoption, including workforce reductions and natural attrition. Those examples are important, but government leaders should not read them as relevant only to the private sector.
Government agencies face many of the same pressures: rising demand, constrained resources, workforce shortages, citizen expectations for faster and simpler services, and an increasing need to process information at scale. AI will inevitably be part of that environment.
However, government adoption carries a different burden of legitimacy. A private company may absorb some service friction or internal dissatisfaction in pursuit of efficiency. Government usually cannot do that without wider consequences. Public trust is more fragile, scrutiny is more intense and the impact on citizens can be more serious.
That means government leaders need to take a particularly careful approach. The question is not whether AI can help, because in many cases it can. The question is how to apply it without undermining fairness, accountability and confidence in public institutions.
In practice, this makes government AI adoption as much a policy and service design exercise as a digital one.
The seventh consideration: workforce impact must be addressed honestly
Perhaps the most difficult implication of Kohler’s article is the most obvious one. If agentic AI performs meaningful work, then jobs, roles and career paths will change.
Some organisations will grow without increasing headcount. Some will redesign roles. Some will reduce labour through attrition. Some will make direct cuts. Leaders may prefer softer language, but the workforce implications are real.
The mistake is not in acknowledging this. The mistake is in pretending otherwise.
Responsible leadership means being honest that AI adoption may reduce some forms of work while increasing the value of others. It means thinking about redeployment before redundancy, capability pathways before capability gaps, and role evolution before simple role removal. It also means considering the less visible consequences, such as what happens to entry-level development if routine tasks disappear, or how organisations build future judgement when early-career learning opportunities are stripped out.
These are not abstract questions. They go to the long-term health of the workforce.
Effective adoption requires workforce planning, not just technology planning.
The eighth consideration: adoption success should be measured broadly
Too many AI initiatives are judged on narrow metrics such as speed, usage or cost reduction. Those measures matter, but they are insufficient.
A more mature view of adoption success would ask broader questions. Has service improved? Has quality improved? Have risks increased or decreased? Are employees using the tool well? Do managers trust the outputs appropriately? Has rework fallen? Have customers or citizens experienced the change positively? Are people clearer about accountability or more confused?
This matters because an AI initiative can look efficient on paper while creating hidden cost elsewhere. A shorter handling time may come at the cost of customer frustration. Automated drafting may increase review burden. Faster triage may create inequitable outcomes if edge cases are not handled well.
Leaders need a balanced scorecard for adoption, one that combines productivity, quality, trust, risk and human impact.
What effective adoption looks like in practice
In practical terms, leaders seeking effective AI adoption in the workplace should focus on several essentials.
They should begin with real work, not abstract enthusiasm. That means identifying specific processes or service areas where AI may add value, and understanding them properly before redesign begins.
They should involve the people who do the work. This is not simply a matter of consultation etiquette. Frontline teams usually understand the exceptions, workarounds and judgement points that determine whether a process actually works. Ignore that knowledge and the design will be neat on paper and messy in reality.
They should define where human judgement remains essential. AI can support and scale work, but accountability should not become a foggy cloud drifting somewhere between the vendor, the platform and a tired team leader.
They should communicate clearly and repeatedly. People need more than a launch message. They need context, boundaries, examples and confidence that leaders are dealing honestly with implications.
They should invest in capability and support. Adoption is not achieved when the tool goes live. It is achieved when people can use it well, understand its limits and adapt their work with confidence.
And they should treat AI adoption as ongoing change, not a one-off implementation. The technology will continue to evolve, as will public expectations, regulatory approaches and organisational learning. Leaders should plan for adaptation, not just deployment.
A final thought for leaders
Kohler’s article is valuable not because it predicts every outcome with certainty, but because it highlights the scale of the transition now underway. The workplace is changing as intelligence itself becomes cheaper, more available and easier to embed into routine operations.
For leaders, this does not remove the need for judgement. It increases it.
The challenge is not to be for or against AI. That debate is already becoming too blunt to be useful. The challenge is to adopt AI in a way that is purposeful, disciplined and humanly credible. Done well, AI can improve productivity, reduce friction and support better service. Done poorly, it can erode trust, weaken capability and create damage that takes years to unwind.
Effective adoption, then, is not about how quickly an organisation can install AI. It is about how thoughtfully it can redesign work around it.
That is where leadership matters most.