AI Literacy for Clients: The New Core Competency
- Date 20 Mar 2026
- Filed under Insights
Public Sector AI Governance Series
Stage 2: Building Client Capability
In the second part of our series, we explore why public-sector leaders can no longer rely on intuition alone: AI literacy is now the foundational competence required to evaluate modern consulting advice.
Generative AI now shapes a significant proportion of the analysis, drafting and synthesis delivered by consulting firms. Policy options papers, stakeholder summaries, risk assessments, transformation plans, and even executive briefings frequently contain AI-assisted content, sometimes openly disclosed, often not.
A consultant may deliver a polished options paper, and two senior reviewers may miss that a key risk statement was an AI-generated hallucination. This is precisely where literacy matters.
While supplier maturity varies widely, the direction of travel is consistent across the sector. This shift has created a new challenge for the public sector:
Agencies must evaluate AI-enabled advice before they have an AI-enabled capability.
Executives receive polished reports, while procurement teams evaluate bids and contract managers approve milestones, all without a clear view of where AI was used, what it changed, or what risks it may have introduced.
The APS AI Plan 2025 makes it clear that agencies will be accountable for how AI is used, not just internally, but across their vendor and consulting ecosystems. That accountability depends on one thing:
AI literacy: a practical, non-technical capability that enables public servants to question, interpret and evaluate AI-shaped work.
This article explains what AI literacy really means for the public sector, why it is now a core competency, and how agencies can build it quickly and confidently.
What “AI literacy” really means for government
AI literacy is not technical. It does not require coding or model expertise.
Public-sector AI literacy is the ability to:
- understand what AI can and cannot do
- assess risks and limitations
- identify signs of AI-generated content
- ask suppliers the right questions
- evaluate whether a recommendation can be trusted
- ensure data is protected
- make informed decisions about AI-shaped analysis
AI literacy is not a single skill. It spans six practical domains:
1. Conceptual Literacy
Understanding what generative AI is, and what it isn’t. Recognising that large language models are probabilistic pattern machines, not reasoning engines.
2. Risk Literacy
Knowing the risks: hallucinations, training-data leakage, cloud retention, cross-border processing, privacy and PSPF concerns.
3. Delivery Literacy
Knowing which tasks AI accelerates (summaries, drafts, mapping) and which require humans (judgement, nuance, risk interpretation).
4. Evaluation Literacy
Recognising when a draft or set of recommendations shows signs of AI error or superficiality.
5. Procurement Literacy
Evaluating AI-enabled bids using tools like AICR, SAFE-AI and the Disclosure Spectrum.
6. Executive Literacy
Ensuring SES leaders understand how AI changes advice and where human judgement remains essential.
AI literacy is the ability to interrogate advice confidently, not technically.
Why AI literacy is now a core public-sector competency
AI literacy has moved from “nice to have” to “necessary for accountability.”
Three significant forces are driving this.
1. AI has reframed the value chain of consulting advice
Much of the labour previously required to produce consulting deliverables (synthesis, drafting, summarisation) is now accelerated by AI. The apparent polish of a document no longer signals its quality.
AI literacy allows public servants to ask:
- Is this insight actually meaningful?
- Is this option grounded in evidence?
- Has AI missed nuance or context?
Without literacy, clients cannot tell the difference between speed and substance.
2. APS policy is raising expectations on government as a client
The APS AI Plan 2025 emphasises:
- explainability
- supplier accountability
- human oversight
- risk management
- third-party AI governance
Agencies cannot satisfy these obligations if leaders cannot read AI-shaped work critically.
3. Literacy is now a risk-control requirement
You cannot govern what you cannot recognise. AI literacy is essential to detecting:
- missing evidence
- unsupported claims
- fabricated references
- overconfident conclusions
- gaps caused by over-compressed synthesis
And critically:
- whether a recommendation is safe to act on
This is no longer optional; it is required for informed, accountable decision-making.
The five competencies of an AI-literate public-sector client
This is the article’s practical core. The most AI-literate public servants are not the most technical; they are the most curious, structured and confident in asking questions.
Here are the five competencies agencies need now.
Competency 1 — Asking the right questions
Plain-language questions reveal more than any tender document:
- Where exactly was AI used?
- What tools were involved?
- What data was uploaded or processed?
- What safeguards did you apply?
- What parts of this work required human judgement?
- How did you validate AI-generated content?
If a supplier cannot answer these questions clearly, transparency is low.
Competency 2 — Recognising AI signature risks in drafts
AI has “tells.” AI-literate clients recognise them:
- overconfident conclusions
- beautiful structure but shallow reasoning
- generic phrasing appearing across sections
- absence of citations or verification
- repetition that feels “patterned”
- fabricated references
- improbable precision (“exactly 42% impact”)
- perfectly balanced options with no trade-offs
These signatures help public servants spot when a section needs deeper scrutiny.
Competency 3 — Understanding AI limitations
Executives don’t need model science; they need risk insight. AI limitations include:
- hallucinations (confidently incorrect statements)
- weak handling of nuance
- inability to perceive political or cultural context
- limited ability to reason about competing priorities
- over-reliance on the structure of the input prompt
- instability (different outputs on different runs)
AI literacy means knowing what AI cannot do, so humans remain the governors of judgement.
Competency 4 — Evaluating AI-enabled work quality
Even if AI contributes to a document, humans remain accountable. AI-literate clients apply simple heuristics:
- Does the argument hang together?
- Are claims supported by evidence?
- Do recommendations logically follow from analysis?
- Are risks and trade-offs explicitly identified?
- Is the reasoning traceable and explainable?
These techniques mirror the APS’s expectations for explainable AI.
Competency 5 — Managing AI-enabled consultants
Clients must know how to manage consultants in an AI-enabled way.
This includes:
- requiring AI Use Statements
- applying the AI Disclosure Spectrum
- using SAFE-AI to test data protection
- assessing AI Content Ratio (AICR)
- evaluating pricing integrity (Article 8)
- requesting evidence of oversight
- holding suppliers accountable for outputs
AI literacy turns procurement into a proactive governance function rather than a passive recipient of supplier claims.
The AI literacy gaps inside government
Across agencies, four literacy gaps are emerging:
1. Executive literacy gap
SES leaders often struggle to detect AI-shaped reasoning in policy options or submissions.
2. Procurement literacy gap
Evaluators often cannot distinguish genuine AI capability from marketing claims.
3. Delivery literacy gap
Policy, program and project teams may not know which tasks AI accelerates — or where human judgement remains essential.
4. Assurance literacy gap
Internal reviewers may not recognise AI-generated hallucinations or omissions that affect trust in advice.
These gaps undermine value for money, risk management, and the APS's ability to meet the expectations of the AI Plan 2025. Once literacy is built deliberately, each of these gaps can be closed.
A practical AI literacy toolkit for government
AI literacy develops fastest when tools are simple and repeatable. Agencies can adopt:
1. AI Use Checklist
To require suppliers to disclose:
- where AI was used
- what data was uploaded
- governance steps taken
- human review processes
2. AI Draft Review Checklist
For evaluating internal or external AI-shaped documents:
- check logic
- check evidence
- check traceability
- check omissions
- check model limitations
3. SAFE-AI
A structured model for assessing consultant data protection practices.
4. AI Disclosure Spectrum
To classify transparency levels (Silent → Declarative → Operational).
5. AI Content Ratio (AICR)
To quantify how much of the work was automated (a minimal worked sketch follows this list).
6. Understand → Control → Verify Model
To integrate literacy into contract management.
7. Executive Micro-Learning Modules
Short briefings for SES on:
- explainability
- oversight
- model limitations
- risk interpretation
This toolkit turns literacy into practice.
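
This article does not spell out the AICR formula, so the sketch below is illustrative only: it assumes AICR is simply the AI-generated share of a deliverable's content, computed from section metadata disclosed in the supplier's AI Use Statement. The Section structure, field names and sample figures are all hypothetical.

```python
# Illustrative sketch only. The AICR formula is not defined in this article;
# here it is assumed to be the disclosed AI-generated share of a deliverable,
# computed from section metadata in the supplier's AI Use Statement.
from dataclasses import dataclass

@dataclass
class Section:
    title: str
    word_count: int
    ai_generated: bool  # as disclosed by the supplier

def ai_content_ratio(sections: list[Section]) -> float:
    """Share of total words disclosed as AI-generated (0.0 to 1.0)."""
    total = sum(s.word_count for s in sections)
    if total == 0:
        return 0.0
    return sum(s.word_count for s in sections if s.ai_generated) / total

# Hypothetical options paper; section names and counts are invented.
paper = [
    Section("Executive summary", 600, ai_generated=True),
    Section("Options analysis", 2400, ai_generated=False),
    Section("Stakeholder synthesis", 1000, ai_generated=True),
]
print(f"AICR: {ai_content_ratio(paper):.0%}")  # -> AICR: 40%
```

Measured this way, AICR is only as reliable as the supplier's disclosure, which is why it sits alongside the AI Use Checklist and the Disclosure Spectrum rather than replacing them.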
Conclusion: AI literacy is now leadership literacy
The public sector cannot rely on suppliers to manage all AI risks, nor can it rely on final deliverables to signal their own quality. AI makes documents look complete even when reasoning is shallow or flawed.
AI literacy is not a technical skill; it is a capability that strengthens decision-making, procurement, evaluation and risk management.
Agencies do not need AI experts. They need AI-literate leaders and teams who can:
- ask good questions
- evaluate AI-shaped work
- recognise risks
- insist on transparency
- ensure judgement remains human
In an AI-enabled public sector, literacy is the foundation of confident, capable decision-making.
Next up in the series
Protecting your data when consultants use AI.

