AI: An Integral Part of Our World — Graduates Must Learn Responsible Usage

Artificial intelligence is rapidly becoming an everyday part of our lives. Many of us use it without even realising, whether we are writing emails, finding a new TV show or managing smart devices in our homes.

It is also increasingly used in many professional contexts – from helping with recruitment to supporting health diagnoses and monitoring students’ progress in school.

But apart from a handful of computing-focused and other STEM programs, most Australian university students do not receive formal tuition in how to use AI critically, ethically or responsibly.

Here’s why this is a problem and what we can do instead.

AI use in unis so far

A growing number of Australian universities now allow students to use AI in certain assessments, provided the use is appropriately acknowledged.

But this does not teach students how these tools work or what responsible use involves.

Using AI is not as simple as typing questions into a chat function. There are widely recognised ethical issues around its use including bias and misinformation. Understanding these is essential for students to use AI responsibly in their working lives.

So all students should graduate with a basic understanding of AI, its limitations, the role of human judgement and what responsible use looks like in their particular field.

We need students to be aware of bias in AI systems. This includes how their own biases could shape how they use the AI (the questions they ask and how they interpret its output), alongside an understanding of the broader ethical implications of AI use.

For example, do the data and the AI tool protect people’s privacy? Has the AI made a mistake? And if so, whose responsibility is that?

What about AI ethics?

The technical side of AI is covered in many STEM degrees. These degrees, along with philosophy and psychology disciplines, may also examine ethical questions around AI. But these issues are not a part of mainstream university education.

This is a concern. When future lawyers use predictive AI to draft contracts, or business graduates use AI for hiring or marketing, they will need skills in ethical reasoning.

Ethical issues in these scenarios could include unfair bias, like AI recommending candidates based on gender or race. It could include issues relating to a lack of transparency, such as not knowing how an AI system made a legal decision. Students need to be able to spot and question these risks before they cause harm.

In healthcare, AI tools are already supporting diagnosis, patient triage and treatment decisions.

As AI becomes increasingly embedded in professional life, the cost of uncritical use also scales up, from biased outcomes to real-world harm.

For example, if a teacher relies on AI carelessly to draft a lesson plan, students might learn a version of history that is biased or just plain wrong. A lawyer who over-relies on AI could submit a flawed court document, putting their client’s case at risk.

How can we do this?

There are international examples we can follow. The University of Texas at Austin and the University of Edinburgh both offer programs in ethics and AI. However, both of these are currently targeted at graduate students. The University of Texas program is focused on teaching STEM students about AI ethics, whereas the University of Edinburgh’s program has a broader, interdisciplinary focus.

Implementing AI ethics in Australian universities will require thoughtful curriculum reform. That means building interdisciplinary teaching teams that combine expertise from technology, law, ethics and the social sciences. It also means thinking seriously about how we engage students with this content through core modules, graduate capabilities or even mandatory training.

It will also require investment in academic staff development and new teaching resources that make these concepts accessible and relevant to different disciplines.

Government support is essential. Targeted grants, clear national policy direction and nationally shared teaching resources could accelerate the shift. Policymakers could consider positioning universities as “ethical AI hubs”. This aligns with the government-commissioned 2024 Australian Universities Accord report, which called for building capacity to meet the demands of the digital era.

Today’s students are tomorrow’s decision-makers. If they don’t understand the risks of AI and its potential for error, bias or threats to privacy, we will all bear the consequences. Universities have a public responsibility to ensure graduates know how to use AI responsibly and understand why their choices matter.

The post “AI is now part of our world. Uni graduates should know how to use it responsibly” by Rachel Fitzgerald, Associate Professor and Deputy Associate Dean (Academic), Faculty of Business, Economics and Law, The University of Queensland was published on 07/17/2025 by theconversation.com