
Deepfake X-rays Deceive Radiologists and AI Models in Recent Study

EurekAlert · Research

Radiologists and leading AI models struggle to distinguish AI-generated deepfake X-ray images from authentic radiographs, according to a recent study published in Radiology.

Key Details

  • Study involved 264 X-ray images, half real and half generated by AI models, including ChatGPT-4o and RoentGen.
  • 17 radiologists from six countries participated, with accuracy in identifying deepfake images ranging from 58% to 92%.
  • Four leading multimodal LLMs (GPT-4o, GPT-5, Gemini 2.5 Pro, Llama 4 Maverick) had detection accuracies between 52% and 89%.
  • No correlation was found between years of radiology experience and accuracy; musculoskeletal specialists performed better than others.
  • Common AI-deepfake X-ray features included overly smooth bones, symmetric lungs, and unnaturally straight spines.
  • Proposed safeguards include invisible watermarks and cryptographic image signatures.
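The study does not describe a specific implementation of the cryptographic-signature safeguard. As a minimal sketch of the idea, an acquiring device could sign the raw image bytes with a secret key at capture time, so any later modification is detectable; real deployments would more likely use asymmetric signatures bound to DICOM metadata rather than the shared-secret HMAC shown here (all names and data below are illustrative assumptions):

```python
import hashlib
import hmac


def sign_image(image_bytes: bytes, key: bytes) -> str:
    """Return a hex HMAC-SHA256 signature over the raw image bytes."""
    return hmac.new(key, image_bytes, hashlib.sha256).hexdigest()


def verify_image(image_bytes: bytes, key: bytes, signature: str) -> bool:
    """Recompute the signature and compare in constant time."""
    return hmac.compare_digest(sign_image(image_bytes, key), signature)


# Hypothetical usage: in practice the key would live in the imaging
# device or PACS, never in application code.
key = b"secret-key-held-by-the-imaging-device"
original = b"raw pixel data of the acquired radiograph"
tampered = b"raw pixel data of a deepfake substitute"

sig = sign_image(original, key)
print(verify_image(original, key, sig))   # True: bytes unchanged since capture
print(verify_image(tampered, key, sig))   # False: signature no longer matches
```

Any pixel-level edit, including wholesale replacement with a synthetic image, changes the byte stream and invalidates the signature, which is why this check works even when the fake is visually convincing.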

Why It Matters

These findings expose high-stakes risks for clinical practice, legal proceedings, and hospital cybersecurity as synthetic images become increasingly indistinguishable from authentic radiographs. Urgent adoption of digital safeguards and specialized training is needed to protect the integrity of medical imaging.
