Much of the current conversation about AI assumes that uptake is inevitable, that more technology means better outcomes, and that the main task is managing risk.
But we asked Aboriginal and Torres Strait Islander people how they are encountering AI in their everyday lives, and a different picture started to emerge. Our Relational Futures project explores Indigenous sovereignty and the governance of AI.
Relational Futures positions AI not as a standalone tool, but as part of a wider system that shapes relationships between people, institutions, data and Country.
We have now reported our findings, and there are clear warnings about what happens when questions of accountability, harm and care are ignored. As one participant told us, AI comes with “no accountability, no checks and balances, no responsibility”.
Facing limited trust
In Australia, we have seen automated decision-making lead to devastating consequences, as in the Robodebt scheme. Similar dynamics are emerging in aged care and the National Disability Insurance Scheme.
These systems are often introduced in the name of efficiency. But efficiency for whom, and at what cost?
AI and automated systems do not enter neutral environments. They enter institutions that already have uneven distributions of power, trust and accountability. When things go wrong, the impacts are not evenly felt.
Our project set out to establish the first qualitative baselines of Indigenous perspectives on AI, using surveys alongside yarning circles.
We wanted to centre Indigenous perspectives and understand more deeply how Indigenous peoples experience new technologies.
Our participants expressed limited trust in AI and, in many cases, a clear willingness to refuse it. That refusal is not a rejection of technology outright: it reflects a recognition that AI can intensify existing inequalities, particularly in sectors such as welfare, health and social services.
There is a strong awareness that automation can make decisions faster – but also harder to see, harder to question, and harder to hold accountable.
Understanding Indigenous data sovereignty
Indigenous data sovereignty centres collective rights and responsibilities in the governance of data. It affirms the authority of Indigenous peoples to control data relating to their communities, lands and resources across the full data lifecycle.
Such governance requires that data practices support self-determination, are grounded in community, and deliver collective benefit without reproducing harm or marginalisation. Participants in our research consistently emphasised community benefit.
The risks identified by our participants go well beyond privacy or data breaches. They pointed to environmental costs, the appropriation and flattening of Indigenous knowledges, and the lack of transparency in how systems are built and deployed.
There is also a clear concern that AI will be used to fill gaps in under-resourced services.
One participant said:
There are times when AI doesn’t quite grasp the depth of First Nations experiences, cultural nuance or community dynamics. It can miss the emotional weight or the context, which reminds me that cultural authority must always sit with mob, not technology.
An ‘AI Elder’
The project also pushed into more speculative territory, asking people to think about what AI could be, not just what it is now. One of the ideas we tested was an “AI Elder” that could work in areas such as reconnecting people to culture, or providing advice on cultural matters.
We asked: what if AI was built around care, cultural knowledge, and responsibility to community, instead of speed and efficiency?
But the reaction of our participants was blunt. Who would that Elder speak for? Who would it answer to? How could it have any real relationship to community?
Elders aren’t just people who hold knowledge. They are part of community: they are trusted because of their relationships, their responsibilities, and their accountability over time.
AI can’t be in relationship in that way, can’t be held accountable, can’t carry obligation. It can’t stand in connection to Country or community.
Even when we try to imagine better versions of AI, there are some things that just don’t translate.
A way forward
AI governance cannot be limited to technical standards or compliance frameworks. It has to engage with authority, responsibility, harm and care.
If AI systems can be designed in ways that are safe, accountable and beneficial for Aboriginal and Torres Strait Islander peoples – who are often the most surveilled and marginalised within systems – they are far more likely to be safe and effective for everyone.
Designing for those at the margins is not a niche concern. It is a test of whether these systems work at all.
As one participant told us:
My biggest concern is that we get left behind. It’s easy to frame AI negatively, seeing it as a threat. It is just as easy to see the benefits it stands to offer. Clearly we need to be involved positively (we risk being left out otherwise) on how AI systems are designed, trained and used, otherwise there is a risk that existing power imbalances will be reproduced through technology.
Relational Futures offers both a warning and a way forward.
Without Indigenous leadership and relational approaches to governance, AI will continue to reproduce the kinds of harms already seen in systems like Robodebt. The way forward is less about slowing technology down, and more about rethinking what it is for, who it serves and how it is held to account.
The post “How Indigenous peoples think about AI” by Bronwyn Carlson, Professor of Critical Indigenous Studies and Director of The Centre for Global Indigenous Futures at Macquarie University, was published on 22 April 2026 by theconversation.com.