As the Landscape Changes, We’re Growing More Lenient Toward Algorithm Mistakes

New inventions — like the printing press, magnetic compasses, steam engines, calculators and the internet — can create radical shifts in our everyday lives. Many of these new technologies were met with some degree of skepticism by those who lived through the transition.

Over the past 30 years alone, we’ve seen our relationship with the internet transform dramatically — it’s fundamentally changed how we search for, remember and learn information; how we evaluate and trust information; and, more recently, how we encounter and interact with artificial intelligence.


As new technologies and ways of doing things emerge, we fixate on their flaws and errors, and judge them more harshly than what we’re already familiar with. These apprehensions are not unwarranted. Today, important debates continue around accountability, ethics, transparency and fairness in the use of AI.

But how much of our aversion is really about the technology itself, and how much is driven by the discomfort of moving away from the status quo?

Algorithm aversion

As a PhD student in cognitive psychology, I study human judgment and decision-making, with a focus on how we evaluate mistakes, and how context, like the status quo, can shape our biases.

In my research with cognitive psychologists Jonathan A. Fugelsang and Derek J. Koehler, we tested how people evaluate errors made by humans versus algorithms depending on what they saw as the norm.

Despite algorithms’ track record of consistently outperforming humans in several prediction and judgment tasks, people have been hesitant to use algorithms. This mistrust goes back as far as the 1950s, when psychologist Paul Meehl argued that simple statistical models could make more accurate predictions than trained clinicians. Yet the response from experts at the time was far from welcoming. As psychologist Daniel Kahneman would later put it, the reaction was marked by “hostility and disbelief.”

That early resistance continues to echo in more recent research, which shows that when an algorithm makes a mistake, people tend to judge and punish it more harshly than when a human makes the same error. This phenomenon is now called algorithm aversion.

Algorithm aversion is when people judge an algorithm more harshly for the same mistake a human might make. (Alex Shuper/Unsplash+)

Defining convention

We examined this bias by asking participants to evaluate mistakes made by either a human or by an algorithm. Before seeing the error, we told them which option was considered the conventional one — described as being historically dominant, widely used and typically relied upon in that scenario.

In half the trials, the task was said to be traditionally done by humans. In the other half, we reversed the roles: the task was described as traditionally done by an algorithm.

When humans were framed as the norm, people judged algorithmic errors more harshly. But when algorithms were framed as the norm, people’s evaluations shifted. They were now more forgiving of algorithmic mistakes, and harsher on humans making the same mistakes.

This suggests that people’s reactions may have less to do with algorithms versus humans, and more to do with whether something fits their mental picture of how things are supposed to be done. In other words, we’re more tolerant when the culprit is also the status quo. And we’re tougher on mistakes that come from what feels new or unfamiliar.

Intuition, nuance and skepticism

Still, explanations for algorithm aversion make intuitive sense. A human decision-maker, for instance, might weigh the nuances of real life in a way an algorithmic system never could.

But is this aversion really just about the non-human limitations of algorithmic technologies? Or is part of the resistance rooted in something broader — something about shifting from one status quo to another?

These questions, viewed through the historic lens of human relationships with past technologies, led us to revisit common assumptions about why people are often skeptical and less forgiving of algorithms.

Signs of that transition are all around us. After all, debates around AI haven’t slowed its adoption. And for a few decades now, algorithmic tech has already been helping us navigate traffic, find dates, detect fraud, recommend music and movies, and even help diagnose illnesses.

And while many studies document algorithm aversion, more recent ones also show algorithm appreciation — where people actually prefer or defer to algorithmic advice in a variety of situations.

We’re increasingly leaning on algorithms, especially when they’re faster, easier to use and seem just as reliable (or more so). As that reliance grows, a shift in how we view technologies like AI — and their errors — seems inevitable.

This shift from outright aversion to increasing tolerance suggests that how we judge mistakes may have less to do with who makes them and more to do with what we’re accustomed to.

The post “As the status quo shifts, we’re becoming more forgiving when algorithms mess up” by Hamza Tariq, PhD Student, Cognitive Psychology, University of Waterloo was published on 08/10/2025 by theconversation.com