There are many claims to sort through in the current era of ubiquitous artificial intelligence (AI) products – especially generative AI tools built on large language models (LLMs), such as ChatGPT, Copilot, Gemini and many others.
AI will change the world. AI will bring “astounding triumphs”. AI is overhyped, and the bubble is about to burst. AI will soon surpass human capabilities, and this “superintelligent” AI will kill us all.
If that last statement made you sit up and take notice, you’re not alone. The “godfather of AI”, computer scientist and Nobel laureate Geoffrey Hinton, has said there’s a 10–20% chance AI will lead to human extinction within the next three decades. An unsettling thought – but there’s no consensus on whether, or how, that might happen.
So we asked five experts: does AI pose an existential risk?
Three out of five said no. Here are their detailed answers.

The post “Does AI pose an existential risk? We asked 5 experts” by Aaron J. Snoswell, Senior Research Fellow in AI Accountability, Queensland University of Technology was published on 10/05/2025 by theconversation.com