Generative AI: Why It’s Misleading to Think of It as a ‘Word Calculator’ – 5 Key Reasons

Last year I attended a panel on generative AI in education. In a memorable moment, one presenter asked: “What’s the big deal? Generative AI is like a calculator. It’s just a tool.”

The analogy is an increasingly common one. OpenAI chief executive Sam Altman himself has referred to ChatGPT as “a calculator for words” and compared comments on the new technology to reactions to the arrival of the calculator.

People said, ‘We’ve got to ban these because people will just cheat on their homework. If people don’t need to calculate a sine function by hand again […] then mathematical education is over.’

However, generative AI systems are not calculators. Treating them like calculators obscures what they are, what they do, and whom they serve. This easy analogy simplifies a controversial technology and ignores five crucial differences from technologies of the past.

1. Calculators do not hallucinate or persuade

Calculators compute functions from clearly defined inputs. You punch in 888 ÷ 8 and get one correct answer: 111.

This output is bounded and unchangeable. Calculators do not infer, guess, hallucinate or persuade.

They do not add fake or unwanted elements to the answer. They do not fabricate legal cases or tell people to “please die”.
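The contrast can be made concrete. Below is a minimal sketch in Python: a calculator-style operation is a pure function (same input, same output, every time), while a generative model samples its output from a probability distribution. The `toy_generate` function, its candidate words, and their weights are all invented for illustration; no real language model is this simple, but the sampling behaviour it shows is real.

```python
import random

def calculate(a, b):
    """Calculator-style computation: a pure function of its inputs."""
    return a / b

def toy_generate(prompt, temperature=1.0):
    """A toy stand-in for a language model: it samples a 'next word'
    from a probability distribution rather than computing one answer.
    Candidates and weights are made up for illustration."""
    candidates = ["111", "about 111", "112", "it depends"]
    weights = [0.7, 0.15, 0.1, 0.05]
    # Higher temperature flattens the distribution, so unlikely
    # (including wrong) answers get picked more often.
    adjusted = [w ** (1.0 / temperature) for w in weights]
    return random.choices(candidates, weights=adjusted, k=1)[0]

# The calculator is bounded and repeatable:
assert calculate(888, 8) == calculate(888, 8) == 111.0

# The sampler can give different "answers" to the identical prompt:
outputs = {toy_generate("888 / 8 = ?", temperature=2.0) for _ in range(200)}
print(outputs)
```

Run repeatedly, `calculate(888, 8)` never varies; `toy_generate` returns a different set of answers each time, some of them simply wrong. That gap, between computing and sampling, is what the calculator analogy papers over.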

2. Calculators do not pose fundamental ethical dilemmas

Generative AI, by contrast, raises ethical dilemmas at every stage of its production and use.

Building ChatGPT, for example, involved workers in Kenya sifting through irreversibly traumatising content for a dollar or two an hour. Calculators didn’t need that.

After the financial crisis in Venezuela, an AI data-labelling company saw an opportunity to snap up cheap labour with exploitative employment models. Calculators didn’t need that, either.

Calculators didn’t require vast new power plants to be built, or compete with humans for water as AI data centres are doing in some of the driest parts of the world.

Nor did the calculator industry spark a mining push like the one now driving rapacious copper and lithium extraction in the lands of the Atacameños in Chile.

3. Calculators do not undermine autonomy

Calculators did not have the potential to become an “autocomplete for life”. They never offered to make every decision for you, from what to eat and where to travel to when to kiss your date.

Calculators did not challenge our ability to think critically. Generative AI, however, has been shown to erode independent reasoning and increase “cognitive offloading”. Over time, reliance on these systems risks placing the power to make everyday decisions in the hands of opaque corporate systems.

4. Calculators do not have social and linguistic bias

Calculators do not reproduce the hierarchies of human language and culture. Generative AI, however, is trained on data that reflects centuries of unequal power relations, and its outputs mirror those inequities.

Language models inherit and reinforce the prestige of dominant linguistic forms, while sidelining or erasing less privileged ones.

Tools such as ChatGPT handle mainstream English, but routinely reword, mislabel, or erase other world Englishes.

While some projects attempt to tackle the exclusion of minoritised voices from technological development, generative AI’s bias towards mainstream English remains worryingly pronounced.

5. Calculators are not ‘everything machines’

Unlike calculators, language models don’t operate within a narrow domain such as mathematics. Instead they have the potential to entangle themselves in everything: perception, cognition, affect and interaction.

Language models can be “agents”, “companions”, “influencers”, “therapists”, and “boyfriends”. This is a key difference between generative AI and calculators.

While calculators help with arithmetic, generative AI may serve both transactional and interactional functions. In one sitting, a chatbot can help you edit your novel, write code for a new app, and produce a detailed psychological profile of someone you think you like.

Staying critical

The calculator analogy makes language models and so-called “copilots”, “tutors”, and “agents” sound harmless. It gives permission for uncritical adoption and suggests technology can fix all the challenges we face as a society.

It also perfectly suits the platforms that make and distribute generative AI systems. A neutral tool needs no accountability, no audits, no shared governance.

But as we have seen, generative AI is not like a calculator. It does not simply crunch numbers or produce bounded outputs.

Understanding what generative AI is really like requires rigorous critical thinking. The kind that equips us to confront the consequences of “moving fast and breaking things”. The kind that can help us decide whether the breakage is worth the cost.

The post “Generative AI is not a ‘calculator for words’. 5 reasons why this idea is misleading” by Celeste Rodriguez Louro, Associate Professor, Chair of Linguistics and Director of Language Lab, The University of Western Australia was published on 08/18/2025 by theconversation.com