A year ago, the Commonwealth government established a policy requiring most federal agencies to publish “AI transparency statements” on their websites by February 2025. These statements were meant to explain how agencies use artificial intelligence (AI), in what domains and with what safeguards.
The stated goal was to build public trust in government use of AI – without resorting to legislation. Six months after the deadline, early results from our research (to be published in full later this year) suggest this policy is not working.
We looked at 224 agencies and found only 29 had easily identifiable AI transparency statements. A deeper search turned up 101 links to statements.
That adds up to a compliance rate of around 45% (101 of 224), although for some agencies (such as defence, intelligence and corporate agencies) publishing a statement is recommended rather than required, and some agencies may share the same statement. Still, these tentative early findings raise serious questions about the effectiveness of Australia’s “soft-touch” approach to AI governance in the public sector.
Why AI transparency matters
Public trust in AI in Australia is already low. The Commonwealth’s reluctance to legislate rules and safeguards for the use of automated decision making in the public sector – identified as a shortcoming by the Robodebt royal commission – makes transparency all the more critical.
The public expects government to be an exemplar of responsible AI use. Yet the very policy designed to ensure transparency seems to be ignored by many agencies.
With the government also signalling a reluctance to pass economy-wide AI rules, good practice in government could encourage action from a disoriented private sector. A recent study found 78% of corporations are “aware” of responsible AI practices, but only 29% have actually “implemented” them.
Transparency statements
The transparency statement requirement is the key binding obligation under the Digital Transformation Agency’s policy for the responsible use of AI in government.
Agencies must also appoint an “accountable [AI] official” who is meant to be responsible for AI use. The transparency statements are supposed to be clear, consistent, and easy to find – ideally linked from the agency’s homepage.
In our research, conducted in collaboration with the Office of the Australian Information Commissioner, we sought to identify these statements using a combination of automated crawling of agency websites, targeted Google searches, and manual inspection of the information commissioner’s list of federal entities. This included both agencies and departments strictly bound by the policy and those invited to comply voluntarily.
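For illustration only, a minimal sketch of the kind of automated homepage check involved might look like the following. The agency URLs, keywords and matching logic here are assumptions made for the example, not the actual research pipeline.

```python
# Illustrative sketch only: a simplified, hypothetical version of an automated
# check for AI transparency statement links on agency homepages.
# The URLs and keywords below are placeholders, not the real list of entities.
import requests
from bs4 import BeautifulSoup

AGENCY_HOMEPAGES = [
    "https://www.example-agency-one.gov.au",  # placeholder URL
    "https://www.example-agency-two.gov.au",  # placeholder URL
]

KEYWORDS = ("ai transparency statement", "transparency statement")


def find_statement_links(homepage_url: str) -> list[str]:
    """Return links on a homepage whose text or href mentions a transparency statement."""
    response = requests.get(homepage_url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    matches = []
    for anchor in soup.find_all("a", href=True):
        text = anchor.get_text(" ", strip=True).lower()
        href = anchor["href"].lower()
        if any(keyword in text or keyword in href for keyword in KEYWORDS):
            matches.append(anchor["href"])
    return matches


if __name__ == "__main__":
    for url in AGENCY_HOMEPAGES:
        try:
            links = find_statement_links(url)
            print(f"{url}: {'found' if links else 'not found on homepage'} {links}")
        except requests.RequestException as error:
            print(f"{url}: request failed ({error})")
```

A homepage check like this can only ever be a first pass: statements buried in subdomains would require deeper crawling or targeted searching, which is why the manual steps mattered.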
But we found only a few statements were accessible from agencies’ landing pages. Many were buried deep in subdomains or required complex manual searching. Among agencies for which publishing a statement was recommended rather than required, we struggled to find any.
More concerning, there were many agencies for which we could not find a statement even though one was required. This may just be a technical failure, but given the effort we put in, it suggests a policy failure.
A toothless requirement
The transparency statement requirement is binding in theory but toothless in practice. There are no penalties for agencies that fail to comply. There is also no open central register to track who has or has not published a statement.
The result is a fragmented, inconsistent landscape that undermines the very trust the policy was meant to build. And the public has no way to understand – or challenge – how AI is being used in decisions that affect their lives.
How other countries do it
In the United Kingdom, the government established a mandatory AI register. But as the Guardian reported in late 2024, many departments failed to list their AI use, despite the legal requirement to do so.
The situation seems to have improved slightly this year, but many high-risk AI systems identified by UK civil society groups are still not published on the UK government’s own register.
The United States has taken a firmer stance. Despite anti-regulation rhetoric from the White House, the government has so far maintained its binding commitments to AI transparency and mitigation of risk.
Federal agencies are required to assess and publicly register their AI systems. If they fail to do so, the rules say they must stop using them.
Towards responsible use of AI
In the next phase of our research, we will analyse the content of the transparency statements we did find.
Are they meaningful? Do they disclose risks, safeguards and governance structures? Or are they vague and perfunctory? Early indications suggest wide variation in quality.
If governments are serious about responsible AI, they must enforce their own policies. If determined university researchers cannot easily find the statements – even assuming they are somewhere deep on the website – that cannot be called transparency.
The authors wish to thank Shuxuan (Annie) Luo for her contribution to this research.
The post “Most Australian government agencies aren’t transparent about how they use AI” by José-Miguel Bello y Villarino, Senior Research Fellow, Sydney Law School, University of Sydney was published on 10/26/2025 by theconversation.com