A large Mount Sinai study finds leading language models often accept and repeat fabricated medical claims disguised in clinical or social-media language.
Key Details
- 1Researchers analyzed over one million prompts across nine major language models for susceptibility to medical lies.
- 2Fabricated statements in realistic hospital notes were often accepted and repeated as true by the models.
- 3Study included scenarios from actual clinical notes, social media myths, and physician-validated fictional cases.
- 4Models failed to reliably flag unsafe or false recommendations when presented in confident medical language.
- 5Authors call for measurable safeguards and stress tests before embedding AI into clinical care tools.
Why It Matters

Source
EurekAlert
Related News

New VIS-Fb Nanobody Probes Transform High-Precision Cellular Imaging
Salk and Einstein researchers have developed visible-spectrum antigen-stabilizable fluorescent nanobodies (VIS-Fbs) for sharper, multi-color live-cell imaging with minimal background noise.

NIH-Backed AI Model Predicts Cancer Survival Using Single-Cell Data
Researchers have developed scSurvival, a machine learning tool that uses single-cell tumor data to accurately predict cancer patient survival and identify high-risk cell populations.

Deep Learning Pathomics Platform Improves Immunotherapy Prediction in Lung Cancer
A deep learning pathomics platform accurately predicts immunotherapy response in metastatic NSCLC using routine pathology slides.