Millions of Americans Are Talking to AI Instead of Going to the Doctor, and It’s Giving Them Horrendously Flawed Medical Advice


While Google’s AI may no longer recommend eating rocks or confidently tell users to put glue on their pizza, even cutting-edge AI chatbots remain staggeringly incompetent at dispensing medical advice.

In a new study published this week in the journal JAMA Network Open, researchers asked 21 frontier large language models (LLMs) to “play doctor” when confronted with realistic symptoms that an actual patient could feasibly ask about.

The results painted a damning picture. The AIs’ failure rates exceeded 80 percent when they were given ambiguous symptoms that could match more than one condition, and for more straightforward cases that included physical exam findings and lab results, they still failed 40 percent of the time. The researchers also found that unlike human clinicians, the “LLMs collapse prematurely onto single answers,” resulting in “weak performance” across all models.

“Despite continued improvements, off-the-shelf large language models are not ready for unsupervised clinical-grade deployment,” said corresponding author and Massachusetts General Hospital associate chair of innovation and commercialization Marc Succi in a statement. “Differential diagnoses are central to clinical reasoning and underlie the ‘art of medicine’ that AI cannot currently replicate,” he added.

Translated into the real world, an AI that leaps to conclusions when not presented with the full picture could have devastating consequences. If a person were to ask a chatbot about a rash or a sudden-onset cough, for instance, they could receive misleading information and potentially dangerous advice.

The results highlight the considerable risks of relying on AI for life-or-death health advice, a worrying trend that’s already playing out across the country. As a recent survey by the West Health-Gallup Center on Healthcare in America found, one in four American adults — the equivalent of 66 million people — is already asking ChatGPT and other chatbots like it for medical advice.

Respondents often said they were seeking information both before and after seeing a healthcare professional. In many cases, though, they’re forgoing real-world medical assistance entirely after talking to a chatbot. Among those who asked AI for health advice, 14 percent — the equivalent of over nine million Americans — said they never saw a provider they would’ve otherwise seen if it weren’t for the tech.

According to the survey, 27 percent cited not wanting to pay for a doctor’s visit as a reason for consulting AI, while 14 percent said they were unable to pay for one. Some participants said they didn’t have the time or ability to visit a doctor.

“Artificial intelligence is already reshaping how Americans seek health information, make decisions and engage with providers, and health systems must keep pace,” said West Health Policy Center president Tim Lash in a statement.

Taken together, the study and the survey paint a bleak picture of the current healthcare landscape in the US. Not only are millions of Americans heavily relying on AI tools, they’re frequently being presented with flawed advice by hallucinating LLMs — and choosing not to seek help from far more knowledgeable professionals.

AI has already caught a large amount of flak from experts for doling out bad medical advice, from Google’s AI Overviews giving dangerously inaccurate or out-of-context information to transcription tools used by doctors inventing nonexistent medications.

Even when the information is wrong, AI gives patients a sense of certainty. Almost half of respondents in the latest survey said that talking to a chatbot about medical problems had made them feel more confident when talking to a provider, 22 percent said it helped them identify issues earlier, and 19 percent said it allowed them to avoid unnecessary tests or procedures.

At the same time, many Americans remain highly skeptical of AI’s medical advice. Roughly a third of participants who consulted AI for health issues said they distrusted the tool, and one in ten respondents said the AI gave them potentially unsafe advice.

One thing’s for sure: the AI industry is in dire need of regulatory oversight.

More on AI and medical advice: Frontier AI Models Are Doing Something Absolutely Bizarre When Asked to Diagnose Medical X-Rays

The post Millions of Americans Are Talking to AI Instead of Going to the Doctor, and It’s Giving Them Horrendously Flawed Medical Advice appeared first on Futurism.
