Measles is back. In recent months, outbreaks have re-emerged across North America, including 2,968 cases in Canada as of May 31, 2025. At the heart of many of these surges lie missed childhood vaccinations, driven not just by access barriers but also by conversations that didn’t happen.
Many clinicians want to support their patients in making protective health decisions, but these are not simple conversations. Trust is essential, and clinicians need to accept that these discussions can be complex, and to learn how to build trust when medical misinformation and misunderstandings are in play.
These conversations are important, but clinicians’ time with patients is often limited, which makes it hard to demonstrate trustworthiness and build trust. That is where we believe, and evidence suggests, artificial intelligence (AI) can help.
A surprising use for AI
AI is already being used to support diagnostic decisions and streamline administrative tasks in health care. But it also offers promise as a training tool for the human side of care.
We’re part of a team researching how chatbots can be developed to help clinicians practise difficult conversations about vaccines. These tools have the potential to provide low-cost, emotionally engaging and psychologically safe simulations for health professionals like doctors, nurse practitioners and pharmacists.
These kinds of tools are especially valuable in rural and remote areas, where access to in-person workshops or continuing education may be limited. Even for busy clinicians in well-resourced areas, chatbots can offer a flexible way to hone communication skills and to learn about circulating concerns.
Improving communication
Research consistently shows that clinicians can increase vaccine uptake by using better communication strategies. Even brief interventions — such as training in motivational interviewing — have measurable impacts on patient trust and behaviour.
Chatbots provide an opportunity to deliver this kind of training at scale. In recent work, computational social scientist David Rand and colleagues have demonstrated how AI-based agents can be trained to engage in social conversations and generate responses that effectively persuade.
These principles can be applied to the clinician–patient setting, allowing professionals to test and refine different ways of engaging with vaccine hesitancy before stepping into real-world conversations.
In research conducted in Hungary, clinicians reported feeling more confident and prepared after interacting with simulated patients. The opportunity to rehearse responses, receive feedback and explore multiple conversational pathways helped clinicians understand what to say — and how and when to say it.

Practising communication
We believe chatbots can be used to train clinicians in a type of presumptive language known as the AIMS method (announce, inquire, mirror and secure trust). Similar approaches, drawing on motivational interviewing, have been tested in Québec, where they have demonstrated success in helping clinicians increase vaccine confidence and uptake among new parents.
This kind of intervention would simulate conversations with patients who have vaccine questions, allowing physicians to practise AIMS techniques in a low-stakes environment. For example, the chatbot could play the role of a parent, and the physician would begin by announcing that it is time to vaccinate their child.
Then, if the “parent” (the chatbot) expresses vaccine hesitancy, the physician would inquire about what is driving it. Importantly, when the “parent” answers those questions, the AIMS approach teaches the physician not to respond directly to the concerns, but to first mirror the response to show the parent that they are being heard and understood.
Finally, and sometimes after multiple rounds of inquiry and mirroring, the physician can move on to securing the parent’s trust.
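To make that loop concrete, here is a minimal sketch, in Python, of how the AIMS stages might be wired into a training simulation. It is purely illustrative: the stage names come from the AIMS method itself, but the transition logic and the scripted “parent” behaviour are our assumptions, not a description of any deployed tool.

```python
from enum import Enum, auto

class AIMSStage(Enum):
    ANNOUNCE = auto()  # announce that it is time to vaccinate
    INQUIRE = auto()   # ask what is driving the hesitancy
    MIRROR = auto()    # reflect the concern back before answering it
    SECURE = auto()    # move on to securing the parent's trust

def next_stage(stage: AIMSStage, parent_still_hesitant: bool) -> AIMSStage:
    """Advance the simulated conversation by one clinician turn.

    Inquire and mirror can loop: if the simulated parent voices
    another concern after being mirrored, the clinician inquires
    again rather than jumping straight to reassurance.
    """
    if stage is AIMSStage.ANNOUNCE:
        return AIMSStage.INQUIRE if parent_still_hesitant else AIMSStage.SECURE
    if stage is AIMSStage.INQUIRE:
        return AIMSStage.MIRROR
    if stage is AIMSStage.MIRROR:
        return AIMSStage.INQUIRE if parent_still_hesitant else AIMSStage.SECURE
    return AIMSStage.SECURE

# Toy run: the "parent" is hesitant at the announcement, raises a
# concern, then feels heard after one round of mirroring.
stage = AIMSStage.ANNOUNCE
for hesitant in [True, True, False]:
    print(stage.name)
    stage = next_stage(stage, hesitant)
print(stage.name)  # SECURE
```

In a real training chatbot, a language model would generate the parent’s replies and judge whether the clinician’s turn actually announced, inquired or mirrored; the state machine simply keeps the rehearsal on track.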
Becoming adept at conversational approaches like AIMS takes practice. That’s what a chatbot can offer: repeated, flexible, low-risk rehearsal. Think of it like a flight simulator for conversations.
Staying ahead of misinformation
The landscape of misinformation is constantly shifting. New conspiracy theories, viral videos and misleading anecdotes can gain traction in days. Clinicians shouldn’t have to confront these narratives for the first time during a brief patient visit.
By having the AI model underlying the chatbot continually trawl the web for the latest misleading claims, and by updating chatbot scenarios regularly, we can help clinicians recognize and respond to the kinds of misinformation circulating right now. This is especially important when trust in institutions is wavering and personalized, empathetic responses are most needed.
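As a rough sketch of what “updating chatbot scenarios regularly” could look like in code, a scenario library could be refreshed on a schedule. Here, fetch_recent_claims is a hypothetical placeholder for whatever monitoring feed a real system would use, and the example claims are invented:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Scenario:
    claim: str            # the misleading claim the simulated parent will raise
    first_seen: datetime  # when the monitoring feed first surfaced it

def fetch_recent_claims() -> list[str]:
    """Hypothetical stand-in for a misinformation-monitoring feed.

    A real system might draw on fact-checking organizations or
    social-listening tools; here we just return canned examples.
    """
    return [
        "claim: measles is harmless in healthy children",
        "claim: the routine schedule overloads the immune system",
    ]

def refresh_scenarios(library: dict[str, Scenario]) -> dict[str, Scenario]:
    """Add a practice scenario for every newly observed claim."""
    now = datetime.now(timezone.utc)
    for claim in fetch_recent_claims():
        library.setdefault(claim, Scenario(claim=claim, first_seen=now))
    return library

library: dict[str, Scenario] = {}
refresh_scenarios(library)  # run on a schedule, e.g. daily
print(len(library), "scenarios available for rehearsal")
```

The design choice worth noting is the separation of concerns: the monitoring feed only supplies claims, while the scenario library decides which are new, so clinicians always rehearse against the current misinformation landscape without retraining the whole system.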
Conversations build trust
While we propose using chatbots to teach doctors how to address vaccine skepticism, motivational interviewing has already been delivered via AI-based chatbots to support smoking cessation, with some promising results.
A similar approach has also been used to encourage the uptake of stress-reduction behaviours. Though the use of chatbots in education is a growing area of inquiry, the specific use of chatbots to train physicians in motivational interviewing approaches is a new field of study.
Using this approach as part of (continuing) clinical education could help prepare those on the front lines to serve as an effective bulwark against vaccine concerns not rooted in science.
In the face of falling vaccination rates and rising distrust, clinicians are on the front lines of public health. We owe them better tools to prepare and build trust.
Trust isn’t built in a moment. It’s built in conversation. And those can be practised.

The post “Chatbots can help clinicians become better communicators, and this could boost vaccine uptake” by Jaigris Hodson, Associate Professor of Interdisciplinary Studies, Royal Roads University was published on 06/10/2025 by theconversation.com