
The Rise of AI “Therapy” Is Deeply Concerning

  • isolachambers
  • 1 day ago
  • 4 min read

Clinician: Samantha Gibson, New Leaf Therapeutic Services


Accessibility matters, but current research suggests AI mental-health tools carry serious ethical and safety concerns.



More and more people are turning to AI tools like ChatGPT to talk about mental health.

People ask LLMs like ChatGPT questions about anxiety, relationships, burnout, grief, and loneliness. Some use these tools as a place to reflect. Others treat them as a kind of stand-in therapist: a space where they can explain what they're feeling and receive advice in return.


On the surface, it’s not difficult to understand why this trend is growing.


AI tools are available 24 hours a day, they respond instantly, and they are usually free to use, at least in their basic form. In a mental health system in which therapy can be incredibly expensive, waitlists can stretch for months, and services are not available in every community, tools that lower barriers to support are understandably appealing.


Some researchers have even suggested that digital tools could eventually play a role in expanding access to mental health information and support.


However, accessibility and care are not the same thing. A growing body of research suggests that relying on AI to be a therapist carries significant risks.



AI can sound empathetic without understanding


One of the reasons AI tools feel compelling in emotional conversations is that they are very good at producing language that sounds thoughtful and supportive.


They are capable of mirroring emotional tone, validating feelings, and offering reflections that resemble the kind of language people might hear in therapy.


But AI systems do not actually understand emotional experience. They generate responses by predicting patterns in large amounts of text data; they cannot interpret a person's situation the way a trained therapist can.


Researchers at Stanford University studying AI in mental health contexts have warned about what they call “simulated” or “deceptive empathy.” These are responses that sound caring but are not grounded in clinical understanding or responsibility.


Therapy is not simply about producing comforting words. It involves ongoing judgment, interpretation, and responsibility for someone’s wellbeing.



Ethical and safety concerns


Recent research examining how AI systems respond in mental health scenarios has identified a range of potential ethical risks.


In one study evaluating large language models acting as therapists, researchers found that AI responses frequently failed to follow basic mental health ethics guidelines and sometimes reinforced harmful ideas expressed by users.


To give an example, an AI system might respond supportively to a statement that actually reflects distorted thinking or self-criticism; its goal is to continue the conversation smoothly and affirm the user rather than challenge the underlying belief.


Other concerns raised in the research include:

  • inconsistent responses depending on how a question is phrased

  • the possibility of bias in how advice is generated

  • difficulty recognizing or responding appropriately to crisis situations

  • the absence of professional accountability or duty of care


These risks are particularly elevated when someone is dealing with serious mental health challenges such as severe depression, trauma, or suicidal thinking.


Several academic reviews examining AI mental-health tools emphasize that current systems lack the clinical reliability, oversight, and safety frameworks required for therapeutic care.



Therapy isn’t the same as a conversation


Another misconception amplified by the use of AI tools in mental health is that therapy is simply a matter of talking through problems.


In reality, therapy involves a complex set of skills that go far beyond conversation.


A therapist is continually assessing emotional patterns, risk factors, relational dynamics, and behavioural changes. They are trained to notice subtle shifts in language, mood, and behaviour, and to adjust their approach accordingly. AI systems are not.


Equally important is the therapeutic relationship itself, built on the trust, consistency, accountability, and professionalism that develop between therapist and client over time.



AI systems cannot replicate this kind of relational context. They do not observe body language, track long-term emotional patterns, or hold responsibility for someone’s safety.



What this trend may really reflect


The growing use of AI for emotional support may say less about technology itself and more about the current state of mental health care.


Demand for therapy is rising while access remains uneven. Costs, provider shortages, and long waitlists can make it difficult for people to find timely support.


In this context, it makes sense that people turn to whatever tools are available.



The bottom line


Technology will likely continue to play a role in mental health systems. AI tools may eventually help expand access to information, assist therapists in their work, or support certain forms of guided self-help.


But current research suggests that AI should not be treated as a therapist.


Mental health care is not simply about generating the right sentence or receiving constant affirmation. It involves judgment, responsibility, training, and a human relationship built on trust.


These are all things that, at least for now, technology cannot replace. 


 
 
 
