AI and radiologists differ in the types and patient characteristics of false-positive findings in digital breast tomosynthesis breast cancer screening.
Key Details
- The study included 2,977 women (average age, 58 years) and 3,183 DBT exams (2013–2017) from UCLA.
- AI-only false positives most often involved benign calcifications (40%), while radiologists' false positives most often involved masses (47%).
- AI and radiologists had nearly identical false-positive rates: 9.7% (AI) vs. 9.5% (radiologists).
- Of 541 false-positive exams, 43% were flagged by AI only, 44% by radiologists only, and 13% by both.
- AI-only false positives occurred in older women (average age, 60 years), less often in women with dense breasts (24%), and more often in women with a prior surgical history (37%).
- Concordant findings (flagged by both AI and radiologists) that required biopsy were high-risk in 44% of cases.
Why It Matters
Identifying how AI and radiologists differ in false-positive findings can inform the design of AI tools to improve screening specificity and reduce unnecessary recalls, directly impacting efficiency and patient care in breast imaging.

Source
AuntMinnie