A large Mount Sinai study finds leading language models often accept and repeat fabricated medical claims disguised in clinical or social-media language.
Key Details
- 1Researchers analyzed over one million prompts across nine major language models for susceptibility to medical lies.
- 2Fabricated statements in realistic hospital notes were often accepted and repeated as true by the models.
- 3Study included scenarios from actual clinical notes, social media myths, and physician-validated fictional cases.
- 4Models failed to reliably flag unsafe or false recommendations when presented in confident medical language.
- 5Authors call for measurable safeguards and stress tests before embedding AI into clinical care tools.
Why It Matters

Source
EurekAlert
Related News

New Framework Compares AI Segmentation Without Ground Truth Annotations
Researchers introduce an open-source approach for evaluating AI anatomy segmentation models in medical imaging without requiring ground truth annotations.

AI-Driven Handheld Endomicroscope Enhances Early Cancer Detection
Researchers develop PrecisionView, a handheld AI-powered endomicroscope for real-time, high-resolution cancer diagnostics.

FDA Approves Johns Hopkins AI Tool for Early Sepsis Detection
FDA clears an AI-driven system developed by Johns Hopkins to detect sepsis up to 48 hours earlier and reduce mortality rates.