Protecting Your Data When Consultants Use AI

  • Date 11 May 2026
  • Filed under Insights
Written by David Wilson

Public Sector AI Governance Series

Stage 3: Managing Information Risk

In the third part of our series, we explore the emerging information risks associated with consultant use of AI and how agencies can manage them in practice.

Public-sector agencies are increasingly receiving consulting deliverables that were created, summarised or drafted using generative AI tools. Sometimes this is disclosed transparently. Sometimes it is mentioned vaguely. Sometimes it is not mentioned at all.

Recent APS policy work, including the APS AI Plan 2025 and forthcoming ClauseBank requirements, signals a significant shift. Agencies will soon be expected to govern not only their own AI use, but also how external providers use AI on their behalf.

The result is a growing set of questions that procurement teams, CIOs, data governance officers and legal units need to answer quickly:

  • Where does your data go when a consultant uses AI?
  • Who sees it?
  • How long is it retained?
  • And could it end up training a model used by thousands of other customers?

The danger isn’t that consultants use AI; responsible AI use can accelerate delivery and reduce cost.

The danger is not knowing where your data goes when they do.

This article provides a practical, plain-language framework to help agencies protect their information when external consultants use AI tools. No technical expertise is required.

Why AI usage by consultants creates new data risks

AI tools introduce a set of risks that are fundamentally different from traditional consulting delivery. These risks are manageable, but only if agencies understand them and require appropriate safeguards. Five stand out.

1. Training Leakage

This is the best-known risk: information from an agency is inadvertently used to train, tune, or improve an external model.

Even when vendors claim, “we don’t use your data for training,” settings, plugins or staff behaviours may contradict that assurance.

 

2. Cross-Client Prompt Contamination

If consultants use the same AI workspace for multiple clients, fragments of one client’s information can influence another client’s outputs.

This is particularly problematic for sensitive policy, commercial-in-confidence or national security material.

 

3. Cloud Retention & Jurisdiction Risk

AI systems often store copies of prompts, embeddings or logs in cloud environments that may be:

  • outside Australia
  • outside PSPF-aligned jurisdictions
  • retained longer than intended

This creates sovereignty and compliance challenges.

 

 

4. Shadow AI Use by Consultant Staff

Even if a firm provides approved tools, consultants may use personal or unmanaged AI tools, including consumer accounts, apps and browser extensions.

For example, a consultant uploads a draft Cabinet briefing into a personal ChatGPT account to “clean up the writing.” The model retains fragments and uses them later for a different client. This is precisely the type of risk the SAFE-AI framework, introduced below, is designed to prevent.

 

5. Loss of Auditability

AI-generated text, summaries, and recommendations can be challenging to trace.

If you don’t know:

  • what went in
  • which model processed it
  • who reviewed the output

then you cannot verify the integrity of the work.

These risks don’t mean agencies should avoid AI; they mean agencies must govern it.

 

What agencies must require before any AI is used

Before a consultant uses AI on your data, agencies should require precise, practical controls. These protections align closely with APS AI Plan 2025 expectations and digital sourcing reforms.

 

1. A Supplier AI Use Declaration

This prevents silent or partial disclosure. The declaration should specify:

  • which AI tools will be used
  • what data those tools process
  • where data is stored
  • whether the model ever trains on Agency inputs
  • how logs, embeddings and prompts are handled

If suppliers cannot answer these questions clearly, they are not ready to use AI in delivery.
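
To make such a declaration easy to compare across suppliers, an agency might ask for it in machine-readable form. The sketch below is illustrative only; the field names are assumptions, not drawn from any official APS template.

```python
# A minimal sketch of a machine-readable supplier AI use declaration.
# Field names are illustrative assumptions, not an official APS template.
from dataclasses import dataclass


@dataclass
class AIUseDeclaration:
    supplier: str
    tools: list[str]                # which AI tools will be used
    data_categories: list[str]      # what data those tools process
    storage_regions: list[str]      # where data is stored
    trains_on_agency_inputs: bool   # whether the model ever trains on inputs
    log_handling: str               # how logs, embeddings and prompts are handled

    def is_complete(self) -> bool:
        """A declaration with unanswered fields is a red flag in itself."""
        return bool(self.tools and self.data_categories
                    and self.storage_regions and self.log_handling)


declaration = AIUseDeclaration(
    supplier="Example Consulting Pty Ltd",
    tools=["Enterprise LLM workspace (training disabled)"],
    data_categories=["Draft policy documents"],
    storage_regions=["australia-east"],
    trains_on_agency_inputs=False,
    log_handling="Prompts and logs deleted after 30 days; no embeddings retained",
)
assert declaration.is_complete() and not declaration.trains_on_agency_inputs
```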

 

2. Evidence of a Governed AI Environment

Mature consultants use only enterprise accounts with:

  • access logs
  • administrator controls
  • disabled training
  • approved plug-ins
  • policy-limited functionality

Personal accounts should be strictly prohibited.
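
The “approved tools only” expectation can be enforced mechanically wherever tool usage is logged. A minimal sketch, assuming a hypothetical usage-record format and tool names:

```python
# Minimal sketch: validating recorded AI tool usage against an agency
# allowlist. Tool names and the record format are hypothetical examples.
APPROVED_TOOLS = {
    "enterprise-llm-workspace",   # enterprise account, training disabled
    "agency-hosted-summariser",   # runs inside agency infrastructure
}

usage_records = [
    {"staff": "analyst-1", "tool": "enterprise-llm-workspace"},
    {"staff": "analyst-2", "tool": "personal-chatgpt-account"},  # shadow AI
]

violations = [r for r in usage_records if r["tool"] not in APPROVED_TOOLS]
for v in violations:
    print(f"Unapproved tool used by {v['staff']}: {v['tool']}")
```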

 

3. Assurance That Your Data Will Not Train External Models

Agencies should require:

  • written confirmation of zero training
  • evidence of model settings
  • retention controls
  • isolation of Agency data
  • indemnification for misuse

This is non-negotiable.

 

4. Data Residency and Sovereignty Controls

Suppliers must clarify:

  • which cloud regions handle prompts
  • whether inference occurs offshore
  • whether logs are stored locally
  • whether data ever leaves Australia

This aligns with PSPF, Privacy Act obligations and the APS AI Plan.
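
Where a supplier’s declaration lists its storage and inference regions (as in the declaration sketch earlier), the residency requirement reduces to a simple containment check. Region names here are hypothetical:

```python
# Minimal sketch: checking declared processing regions against the
# jurisdictions an agency has approved. Region names are hypothetical.
APPROVED_JURISDICTIONS = {"australia-east", "australia-southeast"}

declared_regions = {"australia-east", "us-west"}  # from the supplier declaration

offshore = declared_regions - APPROVED_JURISDICTIONS
if offshore:
    print(f"Data may leave approved jurisdictions: {sorted(offshore)}")
```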

 

5. Human-in-the-Loop Oversight

No AI output should reach an agency without human review.

This ensures:

  • quality control
  • accountability
  • reduction of hallucinations
  • preservation of evidentiary traceability

Agencies should be able to request the workflow behind any deliverable.

 

The SAFE-AI assessment framework

To help agencies move from abstract risk to practical assessment, the SAFE-AI framework offers a straightforward, non-technical method for evaluating how consultants handle government data.

It is a diagnostic tool designed specifically for public-sector use.

SAFE-AI is an interpretive, practitioner-focused model rather than a formal standard, but it provides a consistent way to assess whether a supplier’s AI use is safe, governed and compliant.

 

S — Segregation

Is your data fully isolated so it cannot mix with other clients’ prompts, datasets or workspaces?

Segregation prevents accidental cross-client contamination and reduces the risk that sensitive information is reused or exposed through AI-generated outputs.

 

A — Access Control

Are consultants restricted to approved, enterprise-grade AI environments with proper oversight?

Strong access controls prevent the use of personal accounts, apps, or browser extensions — a significant source of data leakage.

 

F — Flow Restriction

Do you have complete visibility of where your data goes when AI is used?

This includes cloud regions, inference locations, storage of logs or embeddings, and any potential cross-border transfers.

 

E — Expiry

How long is your data retained, and who controls the retention settings?

Agencies should expect zero-retention or short-retention configurations to prevent unnecessary storage of sensitive material.

 

A — Auditability

Can the consultant provide an audit trail showing what was entered into an AI tool, what the model produced, and who reviewed it?

Auditability is essential for accountability, probity and evidence-based decision-making.

 

I — Indemnity

Does the contract clearly protect the agency if a supplier mishandles data or misuses AI?

Indemnity ensures that responsibility remains with the consultancy and is not shifted onto the agency simply because a tool was involved.

 

SAFE-AI can be applied to any consultancy, regardless of size or technical sophistication.
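
As an illustration of how a team might record such an assessment, the six dimensions can be reduced to a yes/no scorecard. SAFE-AI is an interpretive model, not a formal standard, so the pass/fail logic below is an assumption about one way to operationalise it:

```python
# Illustrative SAFE-AI scorecard. SAFE-AI is an interpretive model, not a
# formal standard; this yes/no operationalisation is an assumption.
SAFE_AI_QUESTIONS = {
    "Segregation":      "Is agency data fully isolated from other clients?",
    "Access Control":   "Are staff restricted to approved enterprise AI environments?",
    "Flow Restriction": "Is there full visibility of where data goes (regions, logs)?",
    "Expiry":           "Are zero- or short-retention settings in place?",
    "Auditability":     "Can the supplier show inputs, model used, and reviewer?",
    "Indemnity":        "Does the contract keep responsibility with the supplier?",
}


def assess(answers: dict[str, bool]) -> list[str]:
    """Return the SAFE-AI dimensions that failed and need follow-up."""
    return [dim for dim in SAFE_AI_QUESTIONS if not answers.get(dim, False)]


answers = {"Segregation": True, "Access Control": True, "Flow Restriction": False,
           "Expiry": True, "Auditability": True, "Indemnity": True}
print("Follow up on:", assess(answers))  # -> ['Flow Restriction']
```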

Questions every agency should ask a consultant before they use AI

Procurement, legal and governance teams can use these seven plain-language questions to reveal how mature and safe a supplier really is:

1. Where exactly will our data go when your team uses AI?

2. Does any of our information train or tune a model?

3. What tools are your staff actually using day-to-day?

4. Can you show us your AI governance policy?

5. How do you guarantee our data won’t appear in another client’s work?

6. Who checks AI-generated content before it reaches us?

7. What is your incident response process if something leaks?

Any hesitation or vague response is a warning sign.

What to Put in the Contract: Practical Clause Language

The following clauses are not legal drafting, but rather simple, enforceable procurement requirements aligned with the APS AI Plan 2025.

 

1. Zero Training Clause

The supplier must not use any Agency data to train, fine-tune or calibrate internal or external AI models.

2. Approved Tools Only Clause

Only AI tools explicitly approved by the Agency in writing may be used in service delivery.

3. Data Residency Clause

All data processing, inference and storage must occur within agreed and compliant jurisdictions.

4. Logging & Auditability Clause

The supplier must maintain logs of all AI-assisted actions and provide them upon request.

5. Incident Reporting Clause

Any suspected leakage or unauthorised use of Agency information must be reported within 24 hours.

 

These clauses shift responsibility onto the supplier, where it belongs.
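
Clause 4 (Logging & Auditability) implies a concrete artefact: a structured record of each AI-assisted action. A minimal sketch of what one such record might contain; the field names and the hashing approach are illustrative assumptions, not a prescribed format:

```python
# Minimal sketch of the audit record implied by the Logging & Auditability
# clause. All field names are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone


def make_audit_record(input_text: str, model: str, output_text: str,
                      reviewer: str) -> dict:
    """Record what went in, which model processed it, and who reviewed it.
    Hashes stand in for content so sensitive text is not duplicated in logs."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "input_sha256": hashlib.sha256(input_text.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output_text.encode()).hexdigest(),
        "reviewed_by": reviewer,
    }


record = make_audit_record("Draft briefing text...", "enterprise-llm-v1",
                           "Summary of the draft...", "senior.consultant@example.com")
print(json.dumps(record, indent=2))
```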

How agencies can detect unsafe or undisclosed AI use

Even with declarations in place, agencies should watch for signs of “shadow AI” use:

  • sudden increases in consultant speed
  • inconsistent writing styles
  • repeated phrasing typical of AI models
  • deliverables summarising documents never supplied
  • absent or incomplete version histories
  • consultants unable to explain how an analysis was produced

These indicators are not proof of misuse, but they warrant follow-up questions. One of them, absent or incomplete version histories, can even be checked mechanically, as the sketch below shows.
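
A minimal sketch using the python-docx library, assuming deliverables arrive as .docx files. The revision threshold is an arbitrary illustration, and thin metadata is a prompt for questions, not a reliable AI detector:

```python
# Minimal sketch: flag .docx deliverables with suspiciously thin version
# history. Requires the python-docx package; the revision threshold is an
# arbitrary illustration, not a reliable AI detector.
from docx import Document


def thin_history(path: str, min_revisions: int = 2) -> bool:
    """Return True if the document's saved metadata suggests little editing."""
    props = Document(path).core_properties
    revision = props.revision or 0
    return revision < min_revisions or props.last_modified_by in (None, "")


if thin_history("deliverable.docx"):
    print("Version history looks thin; ask how this document was produced.")
```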

 

The bottom line: protection enables confidence

AI use by consultants is not inherently risky.

What is risky is a lack of transparency, governance or accountability.

With the proper controls, AI can accelerate delivery, improve quality, and reduce costs without compromising the sovereignty, confidentiality, or integrity of government information.

Agencies do not need to become technical experts.

They need to ask the right questions, require clear safeguards and hold suppliers accountable.

AI doesn’t have to put your data at risk — but it does require you to ask better questions.

 

Next up in the series

How to Judge a Consultancy’s AI Maturity