Is AI going to take over the world? Have scientists created an artificial lifeform that can think on its own? Is it going to replace all our jobs, even caring ones, like doctors, teachers and care workers? Are we about to enter an age where computers are better than humans at everything?
The answers, as the authors of The AI Con stress, are “no”, “they wish”, “LOL” and “definitely not”.
The AI Con: How To Fight Big Tech’s Hype and Create the Future We Want – Emily M. Bender and Alex Hanna (Bodley Head)
Artificial intelligence is a marketing term as much as it is a distinct set of computational architectures and techniques. AI has become a magic word for entrepreneurs to attract startup capital for dubious schemes, an incantation deployed by managers to instantly achieve the status of future-forward leaders.
In a mere two letters, it conjures a vision of automated factories and robotic overlords, a utopia of leisure or a dystopia of servitude, depending on your point of view. It is not just technology, but a powerful vision of how society should function and what our future should look like.
In this sense, AI doesn’t need to work for it to work. The accuracy of a large language model may be doubtful, the productivity of an AI office assistant may be claimed rather than demonstrated, but this bundle of technologies, companies and claims can still alter the terrain of journalism, education, healthcare, service work and our broader sociocultural landscape.
Pop goes the bubble
For Emily M. Bender and Alex Hanna, the AI hype bubble needs to be popped.
Bender is a linguistics professor at the University of Washington, who has become a prominent technology critic. Hanna is a sociologist and former employee of Google, who is now the director of research at the Distributed AI Research Institute. After teaming up to mock AI boosters in their popular podcast, Mystery AI Hype Theater 3000, they have distilled their insights into a book written for a general audience. They meet the unstoppable force of AI hype with immovable scepticism.
Step one in this programme is grasping how AI models work. Bender and Hanna do an excellent job of decoding technical terms and unpacking the “black box” of machine learning for lay people.
Driving this wedge between hype and reality, between assertions and operations, is a recurring theme across the pages of The AI Con, and one that should gradually erode readers’ trust in the tech industry. The book outlines the strategic deceptions employed by powerful corporations to reduce friction and accumulate capital. If the barrage of examples tends to blur together, the sense of technical bullshit lingers.
What is intelligence? A famous and highly cited paper co-written by Bender asserts that large language models are simply “stochastic parrots”, drawing on training data to predict which sequence of tokens (roughly, words or word fragments) is most likely to follow the prompt given by a user. Trained on millions of crawled websites, the model can regurgitate “the moon” after “the cow jumped over”, albeit in much more sophisticated variants.
Rather than actually understanding a concept in all its social, cultural and political contexts, large language models carry out pattern matching: an illusion of thinking.
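To make the point concrete, here is a toy sketch of next-token prediction: a trigram counter over an invented three-sentence corpus. Real models use neural networks with billions of parameters over sub-word tokens, but the underlying task – choose the continuation seen most often in the training data – is the same in spirit.

```python
# A toy sketch of next-token prediction. The corpus is invented for
# illustration; the "best" continuation is simply whatever most often
# followed the two-word context in the training text.
from collections import Counter, defaultdict

corpus = (
    "the cow jumped over the moon . "
    "the cow jumped over the moon . "
    "the cow jumped over the fence ."
).split()

# Count which token follows each pair of tokens in the training text.
continuations = defaultdict(Counter)
for i in range(len(corpus) - 2):
    context = (corpus[i], corpus[i + 1])
    continuations[context][corpus[i + 2]] += 1

def predict_next(w1: str, w2: str) -> str:
    """Return the continuation seen most often after (w1, w2)."""
    return continuations[(w1, w2)].most_common(1)[0][0]

print(predict_next("jumped", "over"))  # -> "the"
print(predict_next("over", "the"))     # -> "moon" (seen twice; "fence" once)
```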
But I would suggest that, in many domains, a simulation of thinking is sufficient, as it is met halfway by those engaging with it. Users project agency onto models via the well-known Eliza effect, imparting intelligence to the simulation.
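The effect takes its name from Joseph Weizenbaum’s 1966 chatbot ELIZA. A minimal ELIZA-style responder (the rules below are invented for illustration, not Weizenbaum’s originals, which used keyword-based decomposition rules) shows how little machinery the illusion requires:

```python
# A minimal ELIZA-style responder: a handful of regex rules and no model
# of meaning at all, yet exactly the kind of surface pattern matching
# that users famously read intelligence into.
import re

RULES = [
    (r"i feel (.+)", "Why do you feel {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(utterance: str) -> str:
    text = utterance.strip().rstrip(".")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # generic fallback, another classic ELIZA move

print(respond("I feel replaced by a machine"))
# -> "Why do you feel replaced by a machine?"
```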
Management are pinning their hopes on this simulation. They view automation as a way to streamline their organisations and not be “left behind”. This powerful vision of early adopters vs extinct dinosaurs is one we see repeatedly with the advent of new technologies – and one that benefits the tech industry.
In this sense, poking holes in the “intelligence” of artificial intelligence is a losing move, missing the social and financial investment that wants this technology to work. “Start with AI for every task. No matter how small, try using an AI tool first,” commanded Duolingo’s chief engineering officer in a recent message to all employees. Duolingo has joined Fiverr, Shopify, IBM and a slew of other companies proclaiming their “AI first” approach.

Shapeshifting technology
The AI Con is strongest when it looks beyond or around the technologies to the ecosystem surrounding them, a perspective I have also argued is immensely helpful. By understanding the corporations, actors, business models and stakeholders involved in a model’s production, we can evaluate where it comes from, its purpose, its strengths and weaknesses, and what all this might mean downstream for its possible uses and implications. “Who benefits from this technology, who is harmed, and what recourse do they have?” is a solid starting point, Bender and Hanna suggest.
These basic but important questions extract us from the weeds of technical debate – how does AI function, how accurate or “good” is it really, how can we possibly understand this complexity as non-engineers? – and give us a critical perspective. They place the onus on industry to explain, rather than users to adapt or be rendered superfluous.
We don’t need to be able to explain technical concepts like backpropagation or diffusion to grasp that AI technologies can undermine fair work, perpetuate racial and gender stereotypes, and exacerbate environmental crises. The hype around AI means to distract us from these concrete effects, to trivialise them and thus encourage us to ignore them.

As Bender and Hanna explain, AI boosters and AI doomers are really two sides of the same coin. Conjuring up nightmare scenarios of self-replicating AI terminating humanity and claiming sentient machines will usher us into a posthuman paradise amount, in the end, to the same thing. Both place a religious-like faith in the capabilities of technology, and that faith dominates debate, allowing tech companies to retain control of AI’s future development.
The risk of AI is not potential doom in the future, à la the nuclear threat during the Cold War, but the quieter and more significant harm to real people in the present. The authors explain that AI is more like a panopticon “that allows a single prison warden to keep track of hundreds of prisoners at once”, or the “surveillance dragnets that track marginalised groups in the West”, or a “toxic waste, salting the earth of a Superfund site”, or a “scabbing worker, crossing the picket line at the behest of an employer who wants to signal to the picketers that they are disposable. The totality of systems sold as AI are these things, rolled into one.”
A decade ago, with another “game-changing” technology, author Ian Bogost observed that
rather than utopia or dystopia, we usually end up with something less dramatic yet more disappointing. Robots neither serve human masters nor destroy us in a dramatic genocide, but slowly dismantle our livelihoods while sparing our lives.
The pattern repeats. As AI matures (to some degree) and is adopted by organisations, it moves from innovation to infrastructure, from magic to mechanism. Grand promises never materialise. Instead, society endures a tougher, bleaker future. Workers feel more pressure; surveillance is normalised; truth is muddied with post-truth; the marginal become more vulnerable; the planet gets hotter.
Technology, in this sense, is a shapeshifter: the outward form constantly changes, yet the inner logic remains the same. It exploits labour and nature, extracts value, centralises wealth, and protects the power and status of the already-powerful.
Co-opting critique
In The New Spirit of Capitalism, sociologists Luc Boltanski and Eve Chiapello demonstrate how capitalism has mutated over time, folding critiques back into its DNA.
After enduring a series of blows around alienation and automation in the 1960s, capitalism moved from a hierarchical Fordist mode of production to a more flexible form of self-management over the next two decades. It began to favour “just in time” production, done in smaller teams that (ostensibly) embraced the creativity and ingenuity of each individual. Neoliberalism offered “freedom”, but at a price. Organisations adapted; concessions were made; critique was defused.

AI continues this form of co-option. Indeed, the current moment can be described as the end of the first wave of critical AI. In the last five years, tech titans have released a series of bigger and “better” models, with both the public and scholars focusing largely on generative and “foundation” models: ChatGPT, Stable Diffusion, Midjourney, Gemini, DeepSeek, and so on.
Scholars have heavily criticised aspects of these models – my own work has explored truth claims, generative hate, ethics washing and other issues. Much work focused on bias: the way in which training data reproduces gender stereotypes, racial inequality, religious bigotry, western epistemologies, and so on.
Much of this work is excellent and seems to have filtered into the public consciousness, based on conversations I’ve had at workshops and events. However, by flagging discrete issues, it allows tech companies to practise issue-by-issue resolution. If the accuracy of a facial-recognition system is lower with Black faces, add more Black faces to the training set. If the model is accused of English dominance, fork out some money to produce data on “low-resource” languages.
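In miniature, such a data-level patch might look like the following hedged sketch, with invented numbers and labels rather than any vendor’s actual pipeline:

```python
# A hedged sketch of the "add more faces" fix: naive oversampling of an
# underrepresented group in training data. Numbers and labels are invented.
# Note what it does not touch: the model, the objective, the deployment
# context -- the counts change while everything else stays the same.
import random
from collections import Counter

# Toy training records: (image_id, demographic_label)
train = [(f"img{i:03d}", "white") for i in range(90)] + \
        [(f"img{i:03d}", "Black") for i in range(90, 100)]

print(Counter(label for _, label in train))
# Counter({'white': 90, 'Black': 10})  -- the documented skew

# The patch: duplicate minority examples until the counts match.
minority = [record for record in train if record[1] == "Black"]
train += random.choices(minority, k=80)

print(Counter(label for _, label in train))
# Counter({'white': 90, 'Black': 90})  -- "balanced", structurally unchanged
```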
Companies like Anthropic now regularly carry out “red teaming” exercises designed to highlight hidden biases in models. Companies then “fix” or mitigate these issues. But due to the massive size of the data sets, these tend to be band-aid solutions, superficial rather than structural tweaks.
For instance, soon after they launched, AI image generators came under pressure for not being “diverse” enough. In response, OpenAI introduced a technique to “more accurately reflect the diversity of the world’s population”. Researchers discovered this technique simply tacked additional hidden prompts (e.g. “Asian”, “Black”) onto user prompts. Google’s Gemini model also seems to have adopted this approach, resulting in a backlash when images of Vikings or Nazis were generated with South Asian or Native American features.
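A hypothetical reconstruction of that mechanism might look like this; the term list, trigger heuristic and function name are all invented, since the real systems’ internals are not public:

```python
# Hypothetical reconstruction of silent prompt augmentation, as reported
# by researchers probing image generators. Everything here -- the term
# list, the trigger heuristic, the function name -- is illustrative only.
import random

HIDDEN_TERMS = ["Asian", "Black", "Hispanic", "white"]  # invented list

def augment_prompt(user_prompt: str) -> str:
    """Silently append a demographic term when the prompt depicts people."""
    if any(w in user_prompt.lower() for w in ("person", "people", "portrait")):
        return f"{user_prompt}, {random.choice(HIDDEN_TERMS)}"
    return user_prompt  # prompts without people pass through unchanged

print(augment_prompt("a portrait of a scientist"))
# e.g. "a portrait of a scientist, Black" -- the user never sees the suffix
```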
The point here is not whether AI models are racist or historically inaccurate or “woke”, but that models are political and never disinterested. Harder questions about how culture is made computational, or what kind of truths we want as society, are never broached and therefore never worked through systematically.
Such questions are certainly broader and less “pointy” than bias, but also less amenable to being translated into a problem for a coder to resolve.
What next?
How, then, should those outside the academy respond to AI? The past few years have seen a flurry of workshops, seminars and professional development initiatives. These range from “gee whiz” tours of AI features for the workplace, to sober discussions of risks and ethics, to hastily organised all-hands meetings debating how to respond now, and next month, and the month after that.

Bender and Hanna wrap up their book with their own responses. Many of these, like their questions about how models work and who benefits, are simple but fundamental, offering a strong starting point for organisational engagement.
For the technosceptical duo, refusal is also clearly an option, though individuals will obviously have vastly different degrees of agency when it comes to opting out of models and pushing back on adoption strategies. Refusal of AI, as with many technologies that have come before it, often relies to some extent on privilege. The six-figure consultant or coder will have discretion that the gig worker or service worker cannot exercise without penalties or punishments.
If refusal is fraught at the individual level, it seems more viable and sustainable at a cultural level. Bender and Hanna suggest responding to generative AI with mockery: companies that employ it should be derided as cheap or tacky.
The cultural backlash against AI is already in full swing. Soundtracks on YouTube are increasingly labelled “No AI”. Artists have launched campaigns and hashtags, stressing their creations are “100% human-made”.
These moves are attempts to establish a cultural consensus that AI-generated material is derivative and exploitative. And yet, if these moves offer some hope, they are swimming against the swift current of enshittification. AI slop means faster and cheaper content creation, and the technical and financial logic of online platforms – virality, engagement, monetisation – will always create a race to the bottom.
The extent to which the vision offered by big tech will be accepted, how far AI technologies will be integrated or mandated, how much individuals and communities will push back against them – these are still open questions. In many ways, Bender and Hanna successfully demonstrate that AI is a con. It fails at productivity and intelligence, while the hype launders a series of transformations that harm workers, exacerbate inequality and damage the environment.
Yet such consequences have accompanied previous technologies – fossil fuels, private cars, factory automation – and hardly dented their uptake and transformation of society. So while praise goes to Bender and Hanna for a book that shows “how to fight big tech’s hype and create the future we want”, the issue of AI resonates, for me, with Karl Marx’s observation that people “make their own history, but they do not make it just as they please”.

The post “Is AI a con? A new book punctures the hype and proposes some ways to resist” by Luke Munn, Research Fellow, Digital Cultures & Societies, The University of Queensland, was published on 23 June 2025 by theconversation.com