Across Canada, doctors and nurses are quietly using public artificial-intelligence (AI) tools like ChatGPT, Claude, Copilot and Gemini to write clinical notes, translate discharge summaries or summarize patient data. These services offer speed and convenience, but they also pose unseen cyber-risks once sensitive health information leaves the hospital's control.
Emerging evidence suggests this behaviour is becoming more common. A recent ICT & Health Global article cited a BMJ Health & Care Informatics study showing that roughly one in five general practitioners in the United Kingdom reported using generative-AI tools such as ChatGPT to help draft clinical correspondence or notes.
While Canadian-specific data remain limited, anecdotal reports suggest that similar informal uses may be starting to appear in hospitals and clinics across the country.
This phenomenon, known as shadow AI, refers to the use of AI systems without formal institutional approval or oversight. In health-care settings, it often takes the form of well-intentioned clinicians entering patient details into public chatbots that process information on foreign servers. Once that data leaves a secure network, there is no guarantee where it goes, how long it is stored, or whether it may be reused to train commercial models.
A growing blind spot
Shadow AI has quickly become one of the most overlooked threats in digital health. A 2024 IBM Security report found that the global average cost of a data breach has climbed to nearly US$4.9 million, the highest on record. While most attention goes to ransomware or phishing, experts warn that insider and accidental leaks now account for a growing share of total breaches.
In Canada, the Insurance Bureau of Canada and the Canadian Centre for Cyber Security have both highlighted the rise of internal data exposure, where employees unintentionally release protected information. When those employees use unapproved AI systems, the line between human error and system vulnerability blurs.
Have any of these cases been documented in health settings? While experts point to internal data exposure as a growing risk in health-care organizations, publicly documented cases where the root cause is shadow AI use remain rare. However, the risks are real.
Unlike malicious attacks, these leaks happen silently: patient data is simply copied and pasted into a generative-AI tool. No alarms sound, no firewalls are tripped, and no one realizes that confidential data has crossed national borders. This is how shadow AI can bypass every safeguard built into an organization’s network.
Why anonymization isn’t enough
Even if names and hospital numbers are removed, health information is rarely truly anonymous. Combining clinical details, timestamps and geographic clues can often allow re-identification. A study in Nature Communications showed that even large “de-identified” datasets can be matched to individuals with surprising accuracy when cross-referenced with other public information.
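To see how little it can take, consider a toy Python sketch of a linkage attack. It is not drawn from the study itself; every name, field and value below is invented. A single "de-identified" visit record is matched back to a person using only a postal-code prefix, a birth year and a visit date:

```python
# Toy illustration of a linkage attack; all names, fields and values are invented.
deidentified_visit = {
    "postal_prefix": "M5S",
    "birth_year": 1978,
    "visit_date": "2024-03-14",
    "diagnosis": "type 2 diabetes",
}

# A hypothetical public or leaked dataset that shares a few quasi-identifiers.
public_records = [
    {"name": "A. Example", "postal_prefix": "M5S",
     "birth_year": 1978, "seen_at_clinic_on": "2024-03-14"},
    {"name": "B. Sample", "postal_prefix": "K1A",
     "birth_year": 1990, "seen_at_clinic_on": "2024-06-02"},
]

# Matching on just three quasi-identifiers is often enough to single someone out.
matches = [
    r for r in public_records
    if r["postal_prefix"] == deidentified_visit["postal_prefix"]
    and r["birth_year"] == deidentified_visit["birth_year"]
    and r["seen_at_clinic_on"] == deidentified_visit["visit_date"]
]

if len(matches) == 1:
    print(f"Re-identified {matches[0]['name']}: {deidentified_visit['diagnosis']}")
```

The more detail a record contains, the fewer other people it could plausibly describe.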
Public AI models further complicate the issue. Tools such as ChatGPT or Claude process inputs through cloud-based systems that may store or cache data temporarily.
While providers claim to remove sensitive content, each has its own data-retention policy and few disclose where those servers are physically located. For Canadian hospitals subject to the Personal Information Protection and Electronic Documents Act (PIPEDA) and provincial privacy laws, this creates a legal grey zone.

Everyday examples hiding in plain sight
Consider a nurse using an online translator powered by generative AI to help a patient who speaks another language. The translation appears instant and accurate — yet the input text, which may include the patient’s diagnosis or test results, is sent to servers outside Canada.
Another example involves physicians using AI tools to draft patient follow-up letters or summarize clinical notes, unknowingly exposing confidential information in the process.
A recent Insurance Business Canada report warned that shadow AI could become “the next major blind spot” for insurers.
Because the practice is internal and voluntary, most organizations have no metrics to measure its scope. Hospitals that do not log AI usage cannot audit what data has left their systems or who sent it.
Bridging the gap between policy and practice
Canada’s health-care privacy framework was designed long before the arrival of generative AI. Laws like PIPEDA and provincial health-information acts regulate how data is collected and stored but rarely mention machine-learning models or large-scale text generation.
As a result, hospitals are forced to interpret existing rules in a rapidly evolving technological environment. Cybersecurity specialists argue that health organizations need three layers of response:
1. AI-use disclosure in cybersecurity audits: Routine security assessments should include an inventory of all AI tools being used, sanctioned or otherwise. Treat generative-AI usage the same way organizations handle "bring-your-own-device" risks (a minimal sketch of what such an inventory check might look like follows this list).
2. Certified "safe AI for health" gateways: Hospitals can offer approved, privacy-compliant AI systems that keep all processing within Canadian data centres. Centralizing access allows oversight without discouraging innovation.
3. Data-handling literacy for staff: Training should make clear what happens when data is entered into a public model and how even small fragments can compromise privacy. Awareness remains the strongest line of defence.
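As a minimal sketch of what the first layer could look like in practice, the Python script below flags outbound web-proxy traffic to public generative-AI services so it can be reviewed in an audit. The file name, column names and domain list are all hypothetical assumptions, not part of any specific product or standard:

```python
import csv

# Hypothetical list of public generative-AI domains to flag; a real audit
# would maintain and update such a list centrally.
AI_DOMAINS = {
    "chat.openai.com", "chatgpt.com", "claude.ai",
    "gemini.google.com", "copilot.microsoft.com",
}

def flag_ai_traffic(proxy_log_csv: str):
    """Return (timestamp, user, domain) rows that hit a known AI service.

    Assumes the proxy log has been exported as CSV with 'timestamp',
    'user' and 'domain' columns (purely illustrative field names).
    """
    flagged = []
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"].lower() in AI_DOMAINS:
                flagged.append((row["timestamp"], row["user"], row["domain"]))
    return flagged

# Example usage with a hypothetical export:
# for ts, user, domain in flag_ai_traffic("proxy_log.csv"):
#     print(f"{ts}: {user} -> {domain}")
```

Even a simple report like this gives an organization something it often lacks: a record of what data may have left its network, and through which service.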
These steps won’t eliminate every risk, but they begin to align front-line practice with regulatory intent, protecting both patients and professionals.
The road ahead
The Canadian health-care sector is already under pressure from staffing shortages, cyberattacks and growing digital complexity. Generative AI offers welcome relief by automating documentation and translation, yet its unchecked use could erode public trust in medical data protection.
Policymakers now face a choice: either proactively govern AI use within health institutions or wait for the first major privacy scandal to force reform.
The solution is not to ban these tools but to integrate them safely. Building national standards for “AI-safe” data handling, similar to food-safety or infection-control protocols, would help ensure innovation doesn’t come at the expense of patient confidentiality.
Shadow AI isn’t a futuristic concept; it’s already embedded in daily clinical routines. Addressing it requires a co-ordinated effort across technology, policy and training, before Canada’s health-care system learns the hard way that the most dangerous cyber threats may come from within.
The post “How shadow AI could undermine Canada’s digital health defences” by Abbas Yazdinejad, Postdoctoral Research Fellow, Artificial Intelligence, University of Toronto was published on 11/18/2025 by theconversation.com