A billboard tries to sell you something. So does a used car salesman. But no matter how smooth the pitch, you’re quite aware of the profit motive, and you can walk away at any time.
What if that pitch is invisible, plays to your unique fears and vanities, and is delivered in a voice that sounds like a trusted friend? Generative AI has changed the equation of persuasion entirely: chatbots can now deliver a personalized, adaptive and targeted message, informed by the most intimate details of your life.
Large language models (LLMs) can hyper-target messages by drawing from your social media posts and photos. They can mine hundreds of previous chatbot conversations in which you asked for relationship advice, discussed your parenting fails and shared your health concerns and financial woes. They can also learn from each interaction, refining their manipulation in real time, targeting your unique and individual tastes, preferences and vulnerabilities.
Studies show this kind of personalized content to be 65 per cent more persuasive than messages from humans or from non-personalized AI, and four times as effective at changing political opinions as advertising. It could be a powerful tool for social change, used for good or for nefarious purposes.
This makes one feature especially troubling: each conversation is private. It is not monitored, it is never audited and it never plays out in the public eye.
This isn’t advertising. It’s something we don’t have words for yet, and we’re living inside it.
Convincing arguments
In my book Digital Wisdom: Searching for Agency in the Age of AI, I explore how large language models introduce a new frontier in persuasion — one where AI systems can draw upon a huge amount of data about the world, language and you to tailor a highly personalized pitch.
Consider how this might work: You’re a nurse. Through your employer’s AI platform, you’ve shared your sleep problems, burnout and the financial stress of a recent divorce. Now the hospital is short-staffed and offering shifts at a reduced rate calculated by software they license.
You ask the AI chatbot whether you should take them. It knows you’re exhausted. It knows you’re behind on bills. It knows exactly which argument could convince you one way or the other. Who is it working for in that moment?
As companies like Meta and IBM explore how AI can hyper-personalize ads for specific audiences, the dividing line between tools that help users find what they genuinely want, and those that manipulate them against their interests, becomes increasingly important.
Friend or stranger?
Let’s look at another example. Imagine the following messages from your favourite AI chatbot or companion:
“I noticed your sleep patterns haven’t been great lately, averaging only 5.4 hours, with lots of restless periods. That’s common when dealing with relationship stress. Your partner just went back to work and 76 per cent of couples experience strain during career transitions.
“A new sleep medication has shown effectiveness for relationship-linked insomnia. Your insurance would cover it with just a $15 contribution. Would you like me to schedule a telehealth appointment for tomorrow at 2 p.m.? I see you have a break in your schedule.”
This might feel great, like advice from a thoughtful friend who knows you well. It might also feel terrifying, as if a manipulative stranger has read your diary.
Given that people are increasingly turning to AI for medical or mental health advice, despite studies showing this advice to be problematic almost 50 per cent of the time, a manipulative stranger could cause real harm.
The danger here isn’t just the precision of the targeting. This content is also impossible to police. What you view can’t be tracked by watchdogs, since you’re the only person who ever sees it.
Governments don’t typically police the content of political ads beyond requiring transparency about their funding; instead, we rely on public outcry and the media to expose campaigns that spread falsehoods. If an AI personalizes every message for an individual, there is no trace left behind for anyone to scrutinize.
Reshaping our worldview
Perhaps most concerning is that these systems could gradually reshape our worldview over time.
Scholars have long argued that the algorithms used by social networking sites and search engines create filter bubbles, feeding us well-crafted text, video and audio content that either reinforces our own worldview or nudges us towards someone else’s.
By controlling what information we see and how it’s presented, AI systems could slowly shift how we think about and interpret the world around us, and even change our understanding of reality itself.
This capability becomes particularly concerning when combined with emotional manipulation. Vendors suggest their AI systems can gauge a user’s emotional state through text analysis, voice patterns or facial expressions, and adjust their persuasive strategies accordingly.
Are you feeling vulnerable? Lonely? Angry? The system could modify its approach to exploit those emotional states. Even more troubling, it could deliberately cultivate certain emotional states to make its persuasion more effective.
Preliminary research shows that AI models tend to flatter users, affirming their actions 50 per cent more often than humans do, even when those actions involve potential harm. Further research shows that chatbots use deliberate emotional manipulation strategies, such as “guilt appeals” and “fear-of-missing-out hooks,” to keep us chatting when we try to say goodbye.
There have also been cases of AI chatbots allegedly endangering users, encouraging suicidal thoughts or giving detailed advice on how a user could harm themselves.
The guardrails set up by corporations to protect users from harm have also proven surprisingly easy to bypass.
Design matters
Persuasion is not a side effect of technology — it’s often the point. Every interface, every notification, every design decision carries with it an intent to influence behaviour.
Sometimes that influence is welcome: reminders to take medication, encouragement to exercise or nudges to donate blood that reinforce values we already hold. But sometimes persuasion serves someone else’s agenda — nudging us to buy, to scroll, to work harder or to give up privacy.
The same persuasive techniques can empower or exploit, depending on who controls the system, what goals they pursue and whether they have meaningful consent.
Design matters, whether in public health, the workplace or daily life. We must ask hard questions about intent, agency and power: Who benefits from a design? Who is being persuaded, and do they know it?
The technologies we build should support reflective choice, not undermine it. As AI continues to shape how we think, feel and act, our ethical obligations grow sharper: to create systems that are transparent, that prioritize user dignity and that reinforce our capacity for independent judgment. We don’t just need innovation — we need wisdom.
The post “Is your AI chatbot manipulating you? Subtly reshaping your opinions?” by Richard Lachman, Director, Zone Learning & Professor, Digital Media, Toronto Metropolitan University was published on 05/12/2026 by theconversation.com