Back to all news

Study Reveals LLMs Can Repeat Medical Misinformation From Clinical Notes

EurekAlertResearch

A large Mount Sinai study finds leading language models often accept and repeat fabricated medical claims disguised in clinical or social-media language.

Key Details

  • 1Researchers analyzed over one million prompts across nine major language models for susceptibility to medical lies.
  • 2Fabricated statements in realistic hospital notes were often accepted and repeated as true by the models.
  • 3Study included scenarios from actual clinical notes, social media myths, and physician-validated fictional cases.
  • 4Models failed to reliably flag unsafe or false recommendations when presented in confident medical language.
  • 5Authors call for measurable safeguards and stress tests before embedding AI into clinical care tools.

Why It Matters

As generative AI use expands in healthcare, this study exposes significant weaknesses in current safeguards against medical misinformation. For the radiology-AI community, it underscores the urgency of robust validation and fact-checking mechanisms before integrating LLM-powered tools into clinical workflows.

Ready to Sharpen Your Edge?

Subscribe to join 11k+ peers who rely on RadAI Slice. Get the essential weekly briefing that empowers you to navigate the future of radiology.

We respect your privacy. Unsubscribe at any time.