The term “black swan” refers to a shocking event that is on nobody’s radar until it actually happens. It has been a byword in risk analysis since Nassim Nicholas Taleb’s book The Black Swan was published in 2007. A frequently cited example is the 9/11 attacks.
Fewer people have heard of “grey swans”. Derived from Taleb’s work, grey swans are rare but more foreseeable events: things we know could have a massive impact, but that we don’t (or won’t) adequately prepare for.
COVID was a good example: precedents for a global pandemic existed, but the world was caught off guard anyway.
Although he sometimes uses the term, Taleb doesn’t appear to be a big fan of grey swans. He’s previously expressed frustration that his concepts are often misused, which can lead to sloppy thinking about the deeper issues of truly unforeseeable risks.
But it’s hard to deny there is a spectrum of predictability, and it’s easier to see some major shocks coming. Perhaps nowhere is this more obvious than in the world of artificial intelligence (AI).
Putting our eggs in one basket
Increasingly, the future of the global economy and human thriving has become tied to a single technological story: the AI revolution. It has turned philosophical questions about risk into a multitrillion-dollar dilemma about how we align ourselves with possible futures.
US tech company Nvidia, which dominates the market for AI chips, recently surpassed US$5 trillion (about A$7.7 trillion) in market value. The “Magnificent Seven” US tech stocks – Amazon, Alphabet (Google), Apple, Meta, Microsoft, Nvidia and Tesla – now make up about 40% of the S&P 500 stock index.
A collapse of these companies – and the stock market bust that would follow – would be devastating at a global level, not just financially but also in terms of dashed hopes for progress.
AI’s grey swans
There are three broad categories of risk – beyond the economic realm – that could bring the AI euphoria to an abrupt halt. They’re grey swans because we can see them coming but arguably don’t (or won’t) prepare for them.
1. Security and terror shocks
AI’s ability to generate code, malicious plans and convincing fake media makes it a force multiplier for bad actors. Cheap, open models could help design drone swarms, toxins or cyber attacks. Deepfakes could spoof military commands or spread panic through fake broadcasts.
Arguably, the closest of these risks to a “white swan” – a foreseeable risk with relatively predictable consequences – stems from China’s aggression toward Taiwan.
The world’s biggest AI firms depend heavily on Taiwan’s semiconductor industry for the manufacture of advanced chips. Any conflict or blockade would freeze global AI progress overnight.
2. Legal shocks
Some AI firms have already been sued for allegedly using text and images scraped from the internet to train their models.
One of the best-known examples is the ongoing case of The New York Times versus OpenAI, but there are many similar disputes around the world.
If a major court were to rule that such use counts as commercial exploitation, it could unleash enormous damages claims from publishers, artists and brands.
A few landmark legal rulings could force major AI companies to press pause on developing their models further – effectively halting the AI build-out.
3. One breakthrough too many: innovation shocks
Innovation is usually celebrated, but for companies investing in AI, it could be fatal. New AI technology that autonomously manipulates markets (or even news that such a system is already doing so) would make current financial security systems obsolete.
And an advanced, open-source, free AI model could easily vaporise the profits of today’s industry leaders. We got a glimpse of this possibility in January’s DeepSeek dip, when details about a cheaper, more efficient AI model developed in China caused US tech stocks to plummet.

Why we struggle to prepare for grey swans
Risk analysts, particularly in finance, often talk in terms of historical data. Statistics can give a reassuring illusion of consistency and control. But the future doesn’t always behave like the past.
The wise among us apply reason to carefully confirmed facts and are sceptical of market narratives.
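To see why historical statistics can lull us, here is a minimal sketch (in Python, with entirely made-up numbers) of a naive historical risk estimate. The returns, the two-standard-deviation bound and the size of the shock are all hypothetical illustrations, not a real risk model.

```python
import statistics

# Illustrative only: hypothetical daily returns from a calm stretch of market
# history. Real risk models are far more sophisticated, but they share this
# failure mode: the estimate only reflects the observed past.
calm_daily_returns = [0.001, -0.002, 0.0005, 0.003, -0.001, 0.002,
                      -0.0015, 0.001, -0.0005, 0.0025]

mean = statistics.mean(calm_daily_returns)
stdev = statistics.stdev(calm_daily_returns)

# A common shortcut: treat ~2 standard deviations below the mean as the
# "worst plausible" daily loss (roughly a 95% confidence bound).
worst_plausible_loss = mean - 2 * stdev
print(f"Model's worst plausible daily loss: {worst_plausible_loss:.2%}")

# A grey-swan day (say, a Taiwan blockade or a landmark copyright ruling)
# is not in the sample, so the model simply cannot see it coming.
shock_day = -0.15  # a hypothetical 15% single-day drop
print(f"Shock exceeds the model's bound by {shock_day / worst_plausible_loss:.0f}x")
```

The point is not the arithmetic but the blind spot: a model built only from a calm past assigns effectively zero weight to events outside its sample.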
Deeper causes are psychological: our minds encode things efficiently, often relying on one symbol to represent very complex phenomena.
It takes us a long time to remodel our representations of the world and accept that a looming risk is worth acting on – as we’ve seen with the world’s slow response to climate change.
How can we deal with grey swans?
Staying aware of risks is important. But what matters most isn’t prediction. We need to design for a deeper sort of resilience that Taleb calls “antifragility”.
Taleb argues systems should be built to withstand – or even benefit from – shocks, rather than rely on perfect foresight.
For policymakers, this means ensuring regulation, supply chains and institutions are built to survive a range of major shocks. For individuals, it means diversifying our bets, keeping options open and resisting the illusion that history can tell us everything.
Above all, the biggest problem with the AI boom is its speed. It is reshaping the global risk landscape faster than we can chart its grey swans. Some may collide and cause spectacular destruction before we can react.