AI Apologies: Will They Strip the ‘Canadian Sorry’ of Its Heart?

It is a stereotype that Canadians apologize for everything. We say sorry when you bump into us. We say sorry for the weather. But as we trudge through the grey days of winter, that national instinct for politeness hits a wall of fatigue.

The temptation is obvious. With a single click, Gmail’s “Help me write” or ChatGPT can draft a polite decline to an invitation or a heartfelt thank you for a holiday sweater you’ll never wear.

It’s efficient. It’s polite. It’s grammatically perfect.

It’s also a trap.

New research suggests that when we outsource our social interactions to AI, we are trading away our reputation. Using AI to manage your social life makes you seem less warm, less moral and significantly less trustworthy.


Learning a language is hard, but even native speakers get confused by pronunciation, connotations, definitions and etymology. The lexicon is constantly evolving, especially in the social media era, where new memes, catchphrases, slang, jargon and idioms are introduced at a rapid clip.
Slanguage, The Conversation Canada’s new series, dives into how language shapes the way we see the world and what it reveals about culture, power and belonging. Welcome to the wild and wonderful world of linguistics.


The trap of efficiency

In our consumer economy, we love automation. When I order a package, I don’t need a human to type the shipping notification; I just want the box on my doorstep. We accept — even demand — efficiency from brands.

But our friends are not brands, and our relationships are not transactions.

The new study published in Computers in Human Behavior, “Negative Perceptions of Outsourcing to Artificial Intelligence” by British academic Scott Claessens and colleagues, suggests that emotional dynamics follow different rules than more practical situations do. The researchers found that while we tolerate AI assistance for technical tasks like writing code or planning a daily schedule, we punish it severely in social contexts.

When you use AI to write a love letter, an apology or a wedding vow, the recipient sees a lack of effort, not a well-written text. In relationships, effort is a strong currency of care.

Less warm, less authentic

You might think you can hack this system by being honest. Perhaps you tell your friend: “I used ChatGPT to help me find the right words, but I edited it myself.”

Unfortunately, the data doesn’t indicate this is much of a solution.

Claessens’ work investigated a “best-case” scenario, where a user treated AI as a collaborative tool, employing it for ideas and feedback rather than verbatim copying, and was fully transparent about the process.

The researchers found that the social consequences of this approach are highly task-dependent: for socio-relational tasks like writing love letters, wedding vows or apology notes, participants still rated the sender as significantly less moral, less warm and less authentic than someone who didn’t use AI.

However, for instrumental or non-social tasks like writing computer code or dinner recipes, this collaborative and honest use of AI didn’t lead to negative perceptions of moral character or warmth, even if the user was still perceived as having expended less effort.

This creates a uniquely modern anxiety for the polite Canadian. We apologize to maintain social bonds. But if we use AI to craft that apology, we sever the very bond we are trying to hold onto. An apology generated by an algorithm, no matter how polished, signals that the relationship wasn’t worth the 20 minutes it would have taken to write it yourself.

Authentic inefficiency

This friction isn’t limited to text messages.

I’ve observed a similar pattern in my own preliminary research on consumer behaviour and AI-generated art. This work was conducted with Associate Professor Ying Zhu at the University of British Columbia, Okanagan, and will be presented at the American Marketing Association’s Winter Conference.

Consumers often reject excellent AI creations in creative fields because those works lack the moral weight of human intent.

I believe we’re entering an era where inefficiency and imperfection will become premium products. Just as a flawed hand-knit scarf means more than a mass-produced one, a clunky, typo-ridden text message from a friend is becoming more valuable than a sonnet generated by a language model.

The renowned “Canadian Sorry” is meaningful only because it represents a moment of humility, a pang of guilt, the effort spent finding the right words. When we outsource that labour, we outsource the meaning too.

So as you tackle your inbox this winter, resist the urge to let the robot take the wheel every time. Your clients might need the perfect email, but your friends and family certainly don’t. They want to know you cared enough to find the words yourself.

Joshua Gonzales does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The post “Slanguage: How the use of AI for apologies could cause the ‘Canadian Sorry’ to lose its soul” by Joshua Gonzales, PhD Student in Management at the Lang School of Business and Economics, University of Guelph was published on 01/21/2026 by theconversation.com