A large Mount Sinai study finds leading language models often accept and repeat fabricated medical claims disguised in clinical or social-media language.
Key Details
- 1Researchers analyzed over one million prompts across nine major language models for susceptibility to medical lies.
- 2Fabricated statements in realistic hospital notes were often accepted and repeated as true by the models.
- 3Study included scenarios from actual clinical notes, social media myths, and physician-validated fictional cases.
- 4Models failed to reliably flag unsafe or false recommendations when presented in confident medical language.
- 5Authors call for measurable safeguards and stress tests before embedding AI into clinical care tools.
Why It Matters

Source
EurekAlert
Related News

NIH Invests Additional $12.6M in USC-Led Imaging AI for Alzheimer's
NIH has renewed and expanded its support for a USC-led consortium developing AI to decode and treat Alzheimer's using imaging and genomic data.

USC Unveils Joint Biomedical Engineering Department Bridging Medicine, Engineering, and Imaging
USC's medical and engineering schools launch a joint biomedical engineering department to accelerate interdisciplinary research and innovation, including imaging and AI.

AI Predicts Risks for Outpatient Stem Cell Therapy in Myeloma
Researchers use machine learning to predict adverse events during stem cell therapy for multiple myeloma, improving outpatient safety.