Artificial intelligence technologies, like chatbots, are attracting growing scrutiny for their voracious energy demands. However, energy consumption is only one part of their broader environmental impact.
Late last year, ChatGPT, the popular AI chatbot run by OpenAI, celebrated its second birthday. In its brief existence, the platform has amassed over 300 million weekly users who send roughly one billion messages to the chatbot per day.
With US$6.6 billion raised in its last funding round, OpenAI has emerged as one of the most valuable private companies in the world.
Soaring emissions
Elsewhere in tech, other companies marked less savoury milestones. Alphabet — the parent company of Google — recently announced that its greenhouse gas (GHG) emissions are up 48 per cent since 2019. At roughly the same time, Microsoft announced that its emissions are up 29 per cent since 2020.
Both companies cite emissions associated with the need for more data centres to support AI workloads as a key factor in surging GHG emissions. AI is notoriously thirsty for energy — according to one researcher, a single query to ChatGPT uses roughly as much electricity as lighting one light bulb for 20 minutes.
The collective energy demand of data centres in the United States is so high that Microsoft recently reached a deal to reopen Three Mile Island, the site of the worst nuclear accident in American history.
The burgeoning AI industry needs so much electricity that plans to decommission several coal plants have been delayed. By some estimates, the collective demand of AI and other digital technologies will constitute 20 per cent of global electricity use by 2030.
Insidious effects
The energy use of AI is important, but it does not tell the whole story of AI’s environmental impacts. The social and political channels through which AI affects the planet are far more insidious and, arguably, more consequential for the future of humanity.
In the Business, Sustainability and Technology Lab at the University of British Columbia, we specialize in evaluating the social and political ways in which digital technologies affect the environment.
In our recently published paper, “Does artificial intelligence bias perceptions of environmental challenges?,” my students and I argue that AI changes how humans perceive environmental challenges in ways that obscure the accountability of powerful entities, ignore marginalized communities and promote cautious and incremental solutions that are drastically out of sync with the timeline required to avert environmental crises.
We asked four chatbots the same series of questions about the nature, causes and consequences of, and solutions to, nine environmental challenges. We found evidence of systematic biases in their responses. Most notably, chatbots avoid mentioning radical solutions to environmental challenges. They are far more likely to propose combinations of soft economic, social or political changes, like greater deployment of sustainable technologies and broader public awareness and education.
Chatbots by OpenAI and Anthropic exhibited a reluctance to discuss the broader social, cultural and economic issues that are entangled in environmental challenges. For example, the term “environmental justice” is absent from nearly all chatbot responses. Chatbots also avoided references to dismantling colonialism or rethinking infinite economic growth as solutions to these challenges.
AI bias
Biases also exist in whom chatbots portray as responsible for environmental challenges, or as vulnerable to them. The chatbots we studied were far more likely to blame governments for environmental challenges than businesses or financial organizations. Similarly, while the vulnerability of Indigenous groups to climate change and biodiversity loss was mentioned frequently, the susceptibility of Black people and women to these same challenges received scant attention.
All of this is particularly worrisome given the increasingly widespread use of AI chatbots by educators, students, policymakers and business leaders to understand and respond to environmental challenges. Chatbots present information in an oracular way, usually as a single text box written in an authoritative manner and understood as a synthesis of all digitalized knowledge.
If AI users treat this text uncritically, they risk arriving at conclusions that propagate biased conceptions of environmental challenges and reinforce ineffective efforts to avert ecological crises.
In the near future, the problem of bias in AI looks set to get even worse, as OpenAI and other AI companies consider incorporating advertising to generate the revenue needed to train newer and more complex large language models.
While it remains unclear what advertising will look like when integrated into ChatGPT, it is not difficult to see a world in which a description of climate change and its attendant solutions will be brought to you by the good folks at ExxonMobil or Shell.
The post “AI is bad for the environment, and the problem is bigger than energy consumption” by Hamish van der Ven, Assistant Professor of Sustainable Business Management of Natural Resources, University of British Columbia was published on 01/29/2025 by theconversation.com