Mental health services around the world are stretched thinner than ever. Long wait times, barriers to accessing care and rising rates of depression and anxiety have made it harder for people to get timely help.
As a result, governments and healthcare providers are looking for new ways to address this problem. One emerging solution is the use of AI chatbots for mental health care.
A recent study explored whether a new type of AI chatbot, named Therabot, could treat people with mental illness effectively. The findings were promising: not only did participants with clinically significant symptoms of depression and anxiety benefit, but those at high risk of eating disorders also showed improvement. While early, this study may represent a pivotal moment in the integration of AI into mental health care.
AI mental health chatbots are not new – tools like Woebot and Wysa have already been released to the public and studied for years. These platforms apply predefined rules to a user’s input and reply with a pre-approved response.
What makes Therabot different is that it uses generative AI – a technique in which a program learns from existing data to create new content in response to a prompt. This means Therabot, like popular chatbots such as ChatGPT, can produce novel responses to a user’s input, allowing for a more dynamic and personalised interaction.
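To make that distinction concrete, here is a minimal, purely illustrative Python sketch – it is not the implementation of Woebot, Wysa or Therabot, and the `call_language_model` hook is a hypothetical stand-in for a language-model API. The rule-based bot picks from a fixed set of approved replies, while the generative bot asks a model to compose a new one.

```python
# Purely illustrative sketch -- not the actual Woebot, Wysa or Therabot code.

# A rule-based chatbot matches keywords in the user's input and returns a
# predefined, clinician-approved response; anything unmatched gets a fallback.
RULES = {
    "anxious": "It sounds like you're feeling anxious. Let's try a slow breathing exercise together.",
    "sad": "I'm sorry you're feeling low. Could you name one small thing that went okay today?",
}

def rule_based_reply(user_input: str) -> str:
    text = user_input.lower()
    for keyword, response in RULES.items():
        if keyword in text:
            return response  # selected from a fixed, pre-approved set
    return "Tell me a bit more about how you're feeling."  # generic fallback


def call_language_model(prompt: str) -> str:
    # Hypothetical hook: a real system would call a large language model API here.
    raise NotImplementedError("Connect a language-model API to generate text.")


def generative_reply(user_input: str) -> str:
    # A generative chatbot composes a novel response rather than choosing a canned one,
    # so its wording adapts to the user's exact phrasing and conversation history.
    prompt = f"You are a supportive mental health assistant. The user says: {user_input}"
    return call_language_model(prompt)


if __name__ == "__main__":
    print(rule_based_reply("I've been feeling really anxious before work"))
```

Even this toy example shows the trade-off the article describes: rule-based replies are predictable and easy to vet clinically, while the generative path is more flexible but harder to constrain.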
This isn’t the first time generative AI has been examined in a mental health setting. In 2024, researchers in Portugal conducted a study where ChatGPT was offered as an additional component of treatment for psychiatric inpatients.
The research findings showed that just three to six sessions with ChatGPT led to a significantly greater improvement in quality of life than standard therapy, medication and other supportive treatments alone.
Together, these studies suggest that both general and specialised generative AI chatbots hold real potential for use in psychiatric care. But there are some serious limitations to keep in mind. For example, the ChatGPT study involved only 12 participants – far too few to draw firm conclusions.
In the Therabot study, participants were recruited through a Meta Ads campaign, likely skewing the sample toward tech-savvy people who may already be open to using AI. This could have inflated the chatbot’s apparent effectiveness and engagement levels.
Ethics and exclusion
Beyond methodological concerns, there are critical safety and ethical issues to address. One of the most pressing is whether generative AI could worsen symptoms in people with severe mental illnesses, particularly psychosis.
A 2023 article warned that generative AI’s lifelike responses, combined with most people’s limited understanding of how these systems work, might feed into delusional thinking. Perhaps for this reason, both the Therabot and ChatGPT studies excluded participants with psychotic symptoms.
But excluding these people also raises questions of equity. People with severe mental illness often face cognitive challenges – such as disorganised thinking or poor attention – that might make it difficult to engage with digital tools.
Ironically, these are the people who may benefit the most from accessible, innovative interventions. If generative AI tools are only suitable for people with strong communication skills and high digital literacy, then their usefulness in clinical populations may be limited.
There’s also the possibility of AI “hallucinations” – a known flaw in which a chatbot confidently makes things up, such as inventing a source, quoting a nonexistent study, or giving an incorrect explanation. In the context of mental health, AI hallucinations aren’t just inconvenient; they can be dangerous.
Imagine a chatbot misinterpreting a prompt and validating someone’s plan to self-harm, or offering advice that unintentionally reinforces harmful behaviour. While the studies on Therabot and ChatGPT included safeguards – such as clinical oversight and professional input during development – many commercial AI mental health tools do not offer the same protections.
That’s what makes these early findings both exciting and cautionary. Yes, AI chatbots might offer a low-cost way to support more people at once, but only if we fully address their limitations.
Effective implementation will require more robust research with larger and more diverse populations, greater transparency about how models are trained and constant human oversight to ensure safety. Regulators must also step in to guide the ethical use of AI in clinical settings.
With careful, patient-centred research and strong guardrails in place, generative AI could become a valuable ally in addressing the global mental health crisis – but only if we move forward responsibly.

The post “AI therapy may help with mental health, but innovation should never outpace ethics” by Ben Bond, PhD Candidate in Digital Psychiatry, RCSI University of Medicine and Health Sciences was published on 05/06/2025 by theconversation.com