Deepfake X-rays Fool Radiologists and AI, Raising Security Concerns

Both radiologists and AI models struggle to distinguish authentic radiographic images from AI-generated ("deepfake") ones, raising major security and clinical concerns.

Key Details

  • Research published in RSNA's Radiology shows deepfake X-rays are highly convincing, deceiving even expert radiologists.
  • 17 radiologists from 12 centers across 6 countries participated in the study.
  • Images included AI-generated X-rays (from GPT-4o or RoentGen) mixed with real exams.
  • Even when aware that fakes were present, radiologists could not reliably distinguish authentic images from fake ones.
  • Risks include fraudulent litigation and potential clinical harm if synthetic images are injected into hospital records.

Why It Matters

This highlights a major vulnerability in digital radiology workflows, with implications for cybersecurity, clinical accuracy, and legal liability. Increased vigilance and improved detection methods are urgently needed to safeguard medical imaging integrity.

Source

Radiology Business
