
From AI Plan to Practice: What the APS Must Do Next

Australia’s APS AI Plan 2025 is unusually clear on deadlines: foundational AI training at scale, Chief AI Officers across agencies, and a secure APS-wide chatbot moving from pilot to desktop. It’s a serious attempt to embed AI into the everyday machinery of government, with capability, governance, and enabling infrastructure all in scope.

The next challenge is less about announcing initiatives and more about making adoption safe, consistent, and repeatable across very different agencies and operating environments. In our work with public sector change, the difference between momentum and mishap is nearly always found in the same place: the messy middle where policy intent meets day-to-day delivery.

What the APS should do next

Build role-based capability, not just training completion.

Foundational training lifts general literacy, but it does not create the skills needed to deploy and govern AI in real work. Different roles need different capability uplift: leaders require decision frameworks for value and risk; policy and program teams need practical guidance for safe drafting and analysis; regulatory teams need confidence in assurance and contestability; and corporate services need procurement and records practices that stand up when AI is involved. If the APS stops at "everyone completed the course", it risks uneven adoption and a surge in shadow AI as teams try to get work done outside approved pathways.

Turn AI governance into a usable workflow.

Most agencies can write a policy. Fewer can run a repeatable approval process that is proportionate, fast enough to support delivery, and strong enough to withstand scrutiny. The practical next step is to translate governance expectations into an operating rhythm: simple risk tiering for use cases, minimum evidence packs that teams can complete without weeks of rework, clear decision rights and sign-off points, and a regular review cadence once tools are live. Without this, governance becomes either inconsistent or so heavy that it is bypassed, and neither outcome builds trust.

Treat the chatbot as a change program, not a software rollout.

A desktop assistant changes how work is drafted, reviewed, delegated, and quality-assured. If it is introduced as "here's the tool, good luck", agencies should expect avoidable incidents, inconsistent standards, and frustration from both staff and leaders. Managed well, the chatbot becomes a catalyst for better ways of working: clear expectations for review and verification, team norms for appropriate use, practical records guidance for AI-assisted work products, and manager toolkits that help teams adopt AI without losing judgement, accountability, or auditability.

Standardise assurance expectations for vendors and embedded AI.

Supplier disclosure of embedded AI is a sensible starting point, but it does not automatically produce consistent assurance. Agencies need a common view of what “acceptable evidence” looks like for privacy, security, bias and performance testing, monitoring after deployment, and audit trails that remain defensible over time. This matters even more where tools are shared across the APS, because inconsistency in assurance quickly turns into inconsistency in risk appetite and decision-making.

Raise the bar where rights, entitlements, and enforcement are in play.

In regulatory and decision-making contexts, the risk profile is fundamentally different. Systems used in compliance monitoring, investigations, licensing, or decisions that affect individuals must be contestable and traceable, with strong documentation and oversight. Here, the focus needs to be on defensibility: clarity on human oversight, documentation that supports assurance reviews, and active monitoring for drift, bias, and data exposure risks.

How Asporea can help

The biggest gap is not the technology itself. It is the capability, governance workflow, and change execution needed to use AI safely at scale. Asporea supports agencies to operationalise AI adoption through practical deliverables such as role-based capability pathways, governance-to-workflow templates (risk tiering, evidence packs, decision trees), operating rhythms for reporting and benefits tracking, and structured workforce consultation that maintains trust as work practices evolve.

The APS has set the direction and the deadlines. The next win is making AI adoption workable in the real world: fast enough to deliver value, strong enough to manage risk, and consistent enough to earn trust across government.


Let’s Make Your Transformation Work in Practice

If you are planning or delivering an AI, digital, or major reform initiative, early adoption planning significantly improves outcomes and reduces delivery risk.