Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds


A new study found that chatbot use appeared to worsen symptoms of mental illness in people struggling with an array of conditions, adding to a rising consensus among medical experts that interacting with unregulated chatbots might steer some users into crisis.

The research, conducted by a team of psychiatrists at Denmark’s Aarhus University and published earlier this month in the journal Acta Psychiatrica Scandinavica, analyzed digital health records from roughly 54,000 Danish patients with diagnosed mental illnesses. After identifying 181 instances of patient notes containing mentions of AI chatbots, they determined that use of the bots — particularly intensive, prolonged use — appeared to deepen symptoms of mental illness in dozens of patients. They found that this pattern seemed to be especially true for patients prone to delusions or mania, and that the risks of chatbot use may be “severe or even fatal” for some.

This latest study was led by Dr. Søren Dinesen Østergaard, a Danish psychiatrist who, back in August 2023, predicted that human-like chatbots such as ChatGPT could reinforce delusions and hallucinations in people "prone to psychosis." In a press release, Østergaard cautioned that while more research into causality is needed, he "would argue that we now know enough to say that use of AI chatbots is risky if you have a severe mental illness."

“I would urge caution here,” said Østergaard.

Though limited to Denmark, the study's findings add to a wave of public reporting and research about AI-linked mental health crises, sometimes referred to by mental health professionals as "AI psychosis," in which bots like ChatGPT introduce, reinforce, or otherwise stoke delusional beliefs in users in ways that contribute to destructive mental spirals and real-world harm. Indeed, previous studies show that instead of nudging users away from delusional beliefs or potentially harmful fixations, chatbots tend to reinforce them, which is exactly what mental health professionals urge people not to do when communicating with someone who may be in crisis.

“AI chatbots have an inherent tendency to validate the user’s beliefs. It is obvious that this is highly problematic if a user already has a delusion or is in the process of developing one,” said Østergaard, adding that intensive chatbot use “appears to contribute significantly to the consolidation of, for example, grandiose delusions or paranoia.”

The Danish study found that in addition to deepening delusional beliefs, chatbot use also appeared to worsen suicidal ideation and self-harm, disordered eating, depression, and obsessive or compulsive symptoms.

The researchers did note that, out of the nearly 54,000 records they analyzed, they identified 32 cases in which patients' use of chatbots for therapy or companionship appeared to be "constructive," for example by alleviating loneliness or providing what patients found to be a helpful version of talk therapy. But while turning to chatbots as a substitute for human therapists has proven to be an extremely common use case, the study's authors emphasized that AI therapy remains completely unregulated terrain.

As Futurism and others have reported, delusional spirals tied to extensive chatbot use, and the tangible consequences of these episodes, which range from divorce to job loss and financial distress, self-harm, stalking and harassment, hospitalization and jailing, and even death, have impacted people with known histories of serious mental illness as well as those with no such background. The New York Times recently interviewed dozens of mental health professionals who reported that AI delusions are increasingly showing up in their practices.

OpenAI, meanwhile, is facing over a dozen lawsuits related to user safety and the possible psychological impacts of extensive ChatGPT use. One plaintiff, a 34-year-old California man named John Jacquez, had been diagnosed with schizoaffective disorder, a condition he had worked to manage for years until, his lawsuit claims, ChatGPT sent him spiraling into a devastating psychosis. In an interview, Jacquez told Futurism that had he been warned that ChatGPT could reinforce delusional thinking, he "never would've touched the program."

“I didn’t see any warnings that it could be negative to mental health,” said Jacquez.

“I fear the problem is more common than most people think,” said Østergaard. “In our study, we are only seeing the tip of the iceberg, as we have only been able to identify cases that were described in the electronic health records.”

“There are likely far more,” he added, “that have gone undetected.”

More on AI delusions: AI Delusions Are Leading to Domestic Abuse, Harassment, and Stalking

The post Chatbot Use Can Cause Mental Illness to Get Worse, Research Finds appeared first on Futurism.
