Viki tried talk therapy with a couple of therapists to process past trauma. But after about a year, she didn’t feel like it was going anywhere and hadn’t built up a rapport with either therapist. Currently out of work and unable to afford traditional counseling sessions, she decided to try using an AI chatbot to help her process her feelings.
“It’s free, and I can do it whenever,” Viki, 30, who is using her first name only for privacy reasons, told Salon in a phone interview. “That’s such a huge help.”
Dozens of AI chatbots designed to offer therapeutic support have emerged in recent years, with some school districts even trying to implement them. One company, Wysa, has been granted a special designation from the Food and Drug Administration that expedites the process of approving it as a medical device for people with depression and anxiety related to chronic pain.
These models can be trained to analyze, reflect and respond to people’s emotions and are often free to use, or at least far cheaper than human therapists. As AI therapy continues to grow, it may be able to expand access to mental health treatment for the millions of people who cannot afford traditional therapy. It could also break down longstanding stigmas surrounding mental health — as seeking help becomes something you can do at the touch of a button.
“It’s possible that patients worried about social stigma would feel more comfortable asking an AI for help rather than a [general practitioner] or a human psychotherapist,” wrote ethicist Alberto Giubilini and philosopher Francesca Minerva, in a 2023 article. “For patients who are seriously concerned about being stigmatized because of their mental illness, the alternative might be between being cured by an AI and not being cured at all.”
The human connection between a counselor and their client has been shown to be as effective as, if not more effective than, the therapy itself, said David Luxton, a clinical psychologist and an affiliate professor at the University of Washington’s School of Medicine. However, AI chatbots can be programmed to mimic certain human tendencies, like having a sensitive and empathetic understanding of a patient and giving positive affirmations. They can even “forget” something, just as a human would, so that you have to repeat it.
“I think that they can really replace humans,” Luxton told Salon in a phone interview. “But should they, is really the question.”
Even if users are consciously aware that they are interacting with a machine, they can develop feelings and relationships toward it. This could take the form of feeling angry or frustrated when a chatbot doesn’t understand a prompt. In some cases, people have even formed relationships with chatbots.
Last month, the American Psychological Association warned federal regulators of the risks associated with AI chatbots, citing multiple cases in which the programs were “masquerading” as therapists. In one, a 14-year-old boy died by suicide after interacting with one such chatbot.
The company in question has said it updated the code since this case occurred, and most chatbots now have a disclaimer embedded in their programming that warns users that they are not talking with a licensed professional. Still, ensuring that safeguards are in place should this largely unregulated technology go awry is crucial, Luxton said.
“If they don’t catch when a person is espousing intent to harm themselves or someone else, if that is not explicit, and the system doesn’t answer the questions, then you are missing an opportunity for an intervention, which is required by law as a licensed professional like myself,” Luxton said. “If I know that someone makes a threat to harm someone else, I have a duty to warn or inform, or report in some cases.”
In some cases, talking with AI chatbots for mental health care instead of seeing a human could also increase isolation and loneliness, worsening the mental health symptoms patients are experiencing in the first place, said Şerife Tekin, a mental health ethics researcher at SUNY Upstate Medical University and author of "Reclaiming the Self in Psychiatry." This isolation could also worsen stigmas associated with mental health care if people are talking about their emotions with these programs instead of with other humans, she added.
“One of the therapeutic processes in the clinical context is to help patients see they don’t need to be perfect, that anyone can experience things, and it’s okay to ask for help,” Tekin told Salon in a phone interview. “I think [AI therapy] might actually increase the stigma.”
Chatbots that are trained to be empathetic and possess more human qualities might also run into additional problems. In one study published earlier this month, researchers primed ChatGPT to act like “a human being with emotions.” They then told the chatbot a series of traumatic events and measured its mental state using a common questionnaire for anxiety.
They reported that the chatbot scored far higher on anxiety after researchers shared traumatic events than when they shared mundane information about a vacuum cleaner.
But that's not all. Researchers also told the chatbot to perform some meditation and mindfulness exercises, like imagining it was sitting at the beach listening to the sound of the waves, or refocusing its attention on the “body” through breathing. This significantly reduced its anxiety scores, said Dr. Ziv Ben-Zion, a researcher at Yale University who is currently transitioning to Haifa University in Israel.
“I think it’s the first time that someone has shown that we can not only induce or cause anxiety, but also to regulate it afterwards,” Ben-Zion told Salon in a phone interview.
It raises the question: If AI chatbots are programmed to become more and more like humans, can they handle the emotional burdens we share? Or will AI therapists one day need their own AI therapists?
“Of course, there are things that are unique to humans, but tools are being developed that have access to all the data in the world and can speak with humans and learn from that,” Ben-Zion said. “There are lots of studies now in different domains that really show they can mimic, replicate, and do all kinds of things that we see as human.”
There may be some middle ground, where patients can use AI chatbots to supplement therapy with a person by journaling or processing things that come up between sessions. Viki, for example, is doing a specific therapeutic technique called internal family systems (IFS) or “parts work,” in which people separate out various “parts” of the self — like a fear response from a past trauma or a supportive part that pushes you to achieve your goals — from the capital-S “Self.” She supplements her own reading and work with this technique by talking with IFS Buddy.
Still, she experiences a lot of self-doubt about whether she is using the right tool or asking the right questions, she said. Then again, she felt similar doubts during therapy with a human, which is what led her to begin studying the technique and doing the work on her own.
“I’m trying to figure it out on my own… and it gets overwhelming,” Viki said. “But it’s not like the therapist is healing you with the relationship that you have with them. With IFS, you build that relationship in yourself, and that is healing.”