
Both radiologists and AI models struggle to differentiate between authentic and AI-generated ('deepfake') radiographic images, raising major security and clinical concerns.
Key Details
- Research published in RSNA's Radiology shows deepfake X-rays are highly convincing, deceiving even expert radiologists.
- 17 radiologists from 12 centers across 6 countries participated in the study.
- Images included AI-generated X-rays (from GPT-4o or RoentGen) mixed with real exams.
- Even when aware that fakes were present, radiologists could not reliably distinguish authentic images from fabricated ones.
- Risks include fraudulent litigation and potential clinical harm if synthetic images are injected into hospital records.
Why It Matters

Source
Radiology Business
Related News

Study Highlights Limitations of AI in Prostate MRI Screening
New research points to several shortcomings in implementing AI for MRI-based prostate cancer screening.

Deep Learning Model Predicts Brain Tumor MRI Enhancement Without Gadolinium
German researchers developed a deep learning approach to predict MRI contrast enhancement in brain tumors without the need for gadolinium-based agents.

Stanford Study: LLM-Generated Hospital Notes Safe, Aid Physician Wellbeing
Stanford research shows agentic LLMs can safely draft hospital discharge summaries, reducing physician burnout with minimal risk of patient harm.