We Can’t Ban AI, but We Can Establish Safeguards to Keep It on Track

Artificial intelligence is fascinating, transformative and increasingly woven into how we learn, work and make decisions.

But for every example of innovation and efficiency — such as the custom AI assistant recently developed by an accounting professor at the Université du Québec à Montréal — there’s another that underscores the need for oversight, literacy and regulation that can keep pace with the technology and protect the public.

A recent case in Montréal illustrates this tension. A Québec man was fined $5,000 after submitting court filings that cited “expert quotes and jurisprudence that don’t exist” to defend himself. It was the first ruling of its kind in the province, though similar cases have occurred in other countries.

AI can democratize access to learning, knowledge and even justice. Yet without ethical guardrails, proper training, expertise and basic literacy, the very tools designed to empower people can just as easily undermine trust and backfire.

Why guardrails matter

Guardrails are the systems, norms and checks that ensure artificial intelligence is used safely, fairly and transparently. They allow innovation to flourish while preventing chaos and harm.

The European Union became the first major jurisdiction to adopt a comprehensive framework for regulating AI with the EU Artificial Intelligence Act, which came into force in August 2024. The law divides AI systems into risk-based categories and rolls out rules in phases to give organizations time to prepare for compliance.

European Union lawmakers vote on an Artificial Intelligence Act at the European Parliament on March 13, 2024 in Strasbourg, France. (AP Photo/Jean-Francois Badias)

The act makes some uses of AI unacceptable. These include social scoring and real-time facial recognition in public spaces, which have been banned since February 2025.

High-risk AI used in critical areas like education, hiring, health care or policing will be subject to strict requirements. Starting in August 2026, these systems must meet standards for data quality, transparency and human oversight.

General-purpose AI models became subject to regulatory requirements in August 2025. Limited-risk systems, such as chatbots, must disclose that users are interacting with an algorithm.

The key principle is that the higher the potential impact on rights or safety, the stronger the obligations. The goal is not to slow innovation, but to make it accountable.

Critically, the act also requires each EU member state to establish at least one operational regulatory sandbox. These are controlled frameworks where companies can develop, train and test AI systems under supervision before full deployment.

For small and medium-sized enterprises that lack resources for extensive compliance infrastructure, sandboxes provide a pathway to innovate while building capacity.

Canada is still catching up on AI

Canada has yet to establish a comprehensive legal framework for AI. The Artificial Intelligence and Data Act was introduced in 2022 as part of Bill C-27, a package known as the Digital Charter Implementation Act. It was meant to create a legal framework for responsible AI development, but the bill was never passed.

Canada now needs to act quickly to rectify this. This includes strengthening AI governance, investing in public and professional education and ensuring a diverse range of voices — educators, ethicists, labour experts and civil society — are involved in shaping AI legislation.

A phased approach similar to the EU’s framework could provide certainty while supporting innovation. The highest-risk applications would be banned immediately, while others would face progressively stricter requirements, giving businesses time to adapt.

Minister of Artificial Intelligence and Digital Innovation Evan Solomon gives remarks during the All In AI conference in Montréal on Sept. 25, 2025. (THE CANADIAN PRESS/Christopher Katsarov)

Regulatory sandboxes could help small and medium-sized enterprises innovate responsibly while building much-needed capacity in the face of ongoing labour shortages.

The federal government recently launched the AI Strategy Task Force to help accelerate the country’s adoption of the technology. It is expected to deliver recommendations on competitiveness, productivity, education, labour and ethics in a matter of months.

But as several experts have pointed out, the task force is heavily weighted toward industry voices, risking a narrow view on AI’s societal impacts.

Guardrails alone aren’t enough

Regulations can set boundaries and protect people from harm, but guardrails alone aren’t enough. The other vital foundation of an ethical and inclusive AI society is literacy and skills development.

AI literacy underpins our ability to question AI tools and content, and it is fast becoming a basic requirement in most jobs.

Yet, nearly half of employees using AI tools at work received no training, and over one-third had only minimal guidance from their employers. Fewer than one in 10 small or medium-sized enterprises offer formal AI training programs.

As a result, adoption is happening informally and often without oversight, leaving workers and organizations exposed.

AI literacy operates on three levels. At its base, it means understanding what AI is, how it works and when to question its outputs, including awareness of bias, privacy and data sources. Mid-level literacy involves using generative tools such as ChatGPT or Copilot. At the top are advanced skills, where people design algorithms with fairness, transparency and accountability in mind.

Catching up on AI literacy means investing in upskilling and reskilling that combines critical thinking with hands-on AI use.

As a university lecturer, I often see AI framed mainly as a cheating risk rather than as a tool students must learn to use responsibly. AI can certainly be misused, but educators must protect academic integrity while also preparing students to work alongside these systems.

Balancing innovation with responsibility

We cannot ban or ignore AI, but neither can we let the race for efficiency outpace our ability to manage its consequences or address questions of fairness, accountability and trust.

Skills development and guardrails must advance together. Canada needs diverse voices at the table, real investment to match its ambitions and strong accountability built into any AI laws, standards and protections.

More AI tools will be designed to support learning and work, and more costly mistakes will emerge from blind trust in systems we don’t fully understand. The question is not whether AI will proliferate, but whether we’ll build the guardrails and literacy necessary to accommodate it.

AI can become a complement to expertise, but it cannot be a replacement for it. As the technology evolves, so too must our capacity to understand it, question it and guide it toward public good.

We need to pair innovation with ethics, speed with reflection and excitement with education. Guardrails and skills development, including basic AI literacy, are not opposing forces; they are the two hands that will support progress.

The post “We can’t ban AI, but we can build the guardrails to prevent it from going off the tracks” by Simon Blanchette, Lecturer, Desautels Faculty of Management, McGill University was published on 11/20/2025 by theconversation.com