The reasons why government ministers should avoid using ChatGPT
The news that Peter Kyle, secretary of state for science and technology, had been using ChatGPT for policy advice prompted some difficult questions.

Kyle apparently used the AI tool to draft speeches and even asked it for suggestions about which podcasts he should appear on. But he also sought advice on his policy work, apparently including questions on why businesses in the UK are not adopting AI more readily. He asked the tool to define what “digital inclusion” means.

A spokesperson for Kyle said his use of the tool “does not substitute comprehensive advice he routinely receives from officials” but we have to wonder whether any use at all is suitable. Does ChatGPT give good enough advice to have any role in decisions that could affect the lives of millions of people?

Drawing on our research on AI and public policy, we find that ChatGPT is uniquely flawed as a tool for government ministers in several ways, including the fact that it is backward-looking, when governments should really be looking to the future.

1. Looking back instead of forward

Where government ministers should ideally be seeking fresh ideas with a view to the future, the information that comes out of an AI chatbot is, by definition, from the past. It is a very effective way of summarising what has already been thought, but it is not equipped to suggest genuinely new ways of thinking.

ChatGPT's responses are not drawn from all of the past equally. Because digitisation has increased year on year, ChatGPT's pattern-finding mechanism is weighted towards the recent past. In other words, when a minister asks it for advice on a specific problem in the UK, ChatGPT's responses will be anchored mostly in documents produced in the UK in recent years.

And notably, in Kyle’s case, that means a Labour minister will not only be accessing information from the past, but will be advised by an algorithm leaning heavily on advice given to Conservative governments. That’s not the end of the world, of course, but it is questionable given that Labour won an election by promising change.

Kyle – or any other minister consulting ChatGPT – will be given information grounded in the policy traditions reflecting the Rishi Sunak, Boris Johnson, Theresa May and David Cameron eras. They are less likely to receive information grounded in the thinking of the New Labour years, which were longer ago.

If Kyle asks what digital inclusion means, the answer is more likely to reflect what those Conservative administrations thought it meant than the thinking of governments more aligned with his values.

Amid all the enthusiasm within Labour for leveraging AI, this may be one reason to distance themselves from using ChatGPT for policy advice. They risk Tory policies – ones they so like to criticise – creeping, zombie-like, into their own.

2. Prejudice

ChatGPT has been accused of having “hallucinations” – generating uncannily plausible-sounding falsehoods.

There is a simple technical explanation for this, as alluded to in a recent study. The “truth model” for ChatGPT – as for any large language model – is one of consensus. It models truth as something that everyone agrees to be true. For ChatGPT, its truth is simply the consensus of views expressed across the data it has been trained on.

This is very different from the human model of truth, which is based on correspondence. For us, the truth is what best corresponds to reality in the physical world. The divergence between the truth models could be consequential in many ways.

For example, TV licensing, a model that operates only within a few nations, would not figure prominently within ChatGPT’s consensus model built over a global dataset. Thus, ChatGPT’s suggestions on broadcast media policy are unlikely to substantially touch upon TV licensing.
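The gap between consensus and correspondence can be illustrated with a deliberately simplified sketch. This is not how a large language model actually works; it is a toy majority-vote over a made-up corpus, showing how a claim that is true in one country can be drowned out by a globally more common view:

```python
from collections import Counter

# Hypothetical "training corpus": most documents, drawn from a global pool,
# treat TV licensing as marginal; a minority, reflecting the UK context,
# treat it as central. All statements here are invented for illustration.
corpus = [
    "TV licensing is a marginal funding model",
    "TV licensing is a marginal funding model",
    "TV licensing is a marginal funding model",
    "TV licensing is central to UK broadcasting",
]

def consensus_truth(statements):
    """A consensus model of truth: whatever most documents say wins."""
    return Counter(statements).most_common(1)[0][0]

# The minority claim, however accurate for a specific country,
# never surfaces as the "truth".
print(consensus_truth(corpus))
```

Under a correspondence model, a UK policy expert would instead check the claim against the UK's actual broadcasting arrangements, regardless of how often it appears in a global dataset.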

Besides explaining hallucinations, divergences in truth models have other consequences. Social prejudices, including sexism and racism, are easily internalised under the consensus model.

Consider seeking ChatGPT’s advice on improving conditions for construction workers, a historically male-dominated profession. ChatGPT’s consensus model could blind it to considerations important to women.

The correspondence model of truth enables humans to engage continuously in moral deliberation and change. A human policy expert advising Peter Kyle could enlighten him about pertinent real-world complexities.

For example, they might highlight how recent successes in AI-based diagnostics could help tackle distinct aspects of the UK’s disease burden in the knowledge that one of Labour’s priorities is to cut NHS waiting times.

3. Pleasing narratives

Tools such as ChatGPT are designed to give engaging, elegant narratives when responding to questions. ChatGPT manages this partly by weeding out poor-quality text from its training data (with the help of underpaid workers in Africa).

These poetic pieces of writing work well for engagement and help OpenAI to keep users hooked on their product. Humans enjoy a good story, and particularly one that offers to solve a problem. Our shared evolutionary history has made us story-tellers and story-listeners unlike any other species.

But the real world is not a story. It is a constant swirl of political complexities, social contradictions and moral dilemmas, many of which can never be resolved. The real world and the decisions government ministers have to make on our behalf are complex.

There are competing interests and irreconcilable differences. Rarely is there a neat answer. ChatGPT’s penchant for pleasing narratives stands at odds with the public policy imperative to address messy real-world conditions.

The very features that make ChatGPT a useful tool in many contexts are squarely incompatible with the considerations of public policy, a realm that seeks to make political choices to address the needs of a country’s citizens.

The post “Why ChatGPT is a uniquely terrible tool for government ministers” by Deepak Padmanabhan, Senior Lecturer in AI, Queen’s University Belfast was published on 04/04/2025 by theconversation.com