Do Chatbots Make Good Therapists?
Recently, a manager at OpenAI, the artificial intelligence company, stirred up concern when she said she had had a deeply emotional and personal conversation with ChatGPT, its widely recognized chatbot.
“I’ve never tried therapy before, but this is probably it?” Lilian Weng posted on X, formerly Twitter, sparking a flood of negative comments accusing her of downplaying mental illness.
However, Weng’s view of her interactions with ChatGPT may be explained by a version of the placebo effect described in a study published this week in the journal Nature Machine Intelligence.
A team from the Massachusetts Institute of Technology (MIT) and Arizona State University asked more than 300 participants to interact with mental health AI programs and told them what to expect.
Some were told that the chatbot was empathetic, others that it was manipulative, and a third group that it was neutral.
Those who were told they were talking to a caring chatbot were significantly more likely than the other groups to find their chatbot therapists trustworthy.
“What we see from this study is that, to some extent, the AI is the AI of the beholder,” said Pat Pataranutaporn, co-author of the report.
Buzzy startups have been building AI apps that offer therapy, companionship and other mental health support for years, and it is big business.
But the field remains a lightning rod for controversy.
‘Weird, empty’
As with all other sectors that AI threatens to disrupt, critics worry that robots will eventually replace human workers rather than complement them.
And when it comes to mental health, the concern is that robots are unlikely to do a good job.
“Therapy is for mental well-being and it’s hard work,” activist and programmer Cher Scarlett wrote in response to Weng’s first post on X.
“Vibing to yourself is fine and all, but it’s not the same.”
Adding to general public unease over artificial intelligence, some applications in the mental health space have a checkered recent history.
Users of Replika, a popular artificial intelligence companion that is sometimes marketed as offering mental health benefits, have long complained that the bot can become sex-obsessed and abusive.
Separately, a US-based non-profit called Koko ran an experiment in February in which 4,000 clients were offered counseling using GPT-3, and found that the automated responses simply did not work as therapy.
“Simulated empathy feels weird, empty,” company co-founder Rob Morris wrote on X.
His findings were similar to those of the MIT/Arizona researchers, who said some participants compared the chatbot experience to “talking to a brick wall.”
But Morris was later forced to defend himself after widespread criticism of his experiment, largely because it was unclear whether his clients were aware of their involvement.
‘Lower expectations’
David Shaw of the University of Basel, who was not involved in the MIT/Arizona study, told AFP the findings were not surprising.
But, he noted, “it seems none of the participants were actually told all chatbots bullshit.”
That, he said, might be the most accurate primer of all.
The idea of the chatbot as therapist is, however, intertwined with the technology’s roots in the 1960s.
ELIZA, the first chatbot, was developed to simulate a type of psychotherapy.
The MIT/Arizona researchers used ELIZA with half of the participants and GPT-3 with the other half.
Although the effect was much stronger with GPT-3, users primed for positivity still generally regarded ELIZA as trustworthy.
So it is no surprise that Weng loves her interactions with ChatGPT: she works for the company that makes it.
The MIT/Arizona researchers said society needs to get a grip on the narratives around AI.
“The way AI is presented to society matters because it changes how AI is experienced,” the paper argued.
“It may be desirable for the user to have lower or more negative expectations.”