A novel explainable AI model accurately detects and localizes breast tumors on MRI, outperforming conventional models—especially in low-cancer-prevalence screening scenarios.
Key Details
- The explainable fully convolutional data description (FCDD) model was trained and tested on 9,738 breast MRI exams from 2005-2022, plus an external multicenter dataset.
- FCDD outperformed standard binary classification models, achieving AUCs of up to 0.84 (balanced tasks) and 0.72 (imbalanced tasks) vs. 0.81 and 0.69 for the benchmarks (p < 0.001).
- In both internal and external validation, FCDD consistently showed higher detection performance, e.g., an AUC of 0.86 vs. 0.79 on the external set.
- At 97% sensitivity in the imbalanced (prevalence-realistic) setting, the model achieved 13% specificity vs. 9% for the benchmark (p = 0.02).
- It produces interpretable heatmaps highlighting probable tumor areas, addressing the "black box" issue in AI models (a minimal sketch of the approach follows this list).
- Researchers note the potential to streamline breast MRI screening, including use with abbreviated MRI protocols.
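FCDD is a deep one-class classification approach: a fully convolutional network assigns an anomaly score to each spatial location, and the resulting low-resolution score map is upsampled into a heatmap over the input image. The sketch below illustrates that general idea only; the tiny backbone, layer sizes, and the FCDDSketch/heatmap names are assumptions for illustration, not the architecture or code used in the study.

```python
# Minimal sketch of an FCDD-style anomaly heatmap (PyTorch).
# Assumptions: toy backbone and names; not the study's model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCDDSketch(nn.Module):
    """Fully convolutional net whose output map is scored per location."""
    def __init__(self, in_channels: int = 1):
        super().__init__()
        # Small fully convolutional stack; real models use a deeper backbone.
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, kernel_size=1),  # one anomaly channel per location
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.features(x)
        # Pseudo-Huber transform used by FCDD: sqrt(A^2 + 1) - 1 >= 0.
        return torch.sqrt(a * a + 1.0) - 1.0

def heatmap(model: nn.Module, x: torch.Tensor) -> torch.Tensor:
    """Upsample the low-resolution anomaly map to image size for viewing."""
    scores = model(x)
    return F.interpolate(scores, size=x.shape[-2:], mode="bilinear",
                         align_corners=False)

if __name__ == "__main__":
    mri_slice = torch.randn(1, 1, 128, 128)  # toy stand-in for an MRI slice
    model = FCDDSketch()
    print(heatmap(model, mri_slice).shape)  # torch.Size([1, 1, 128, 128])
```

The key design point is that every score in the heatmap comes directly from the network's spatial output rather than from a post-hoc saliency method, which is what makes the localization interpretable by construction.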
Why It Matters

An interpretable model that localizes suspicious regions and holds up in low-prevalence, imbalanced settings resembling real screening populations could make breast MRI screening more efficient, including with abbreviated protocols.
Source
AuntMinnie
Related News

Comparing False-Positive Findings: AI vs. Radiologists in DBT Screening
AI and radiologists differ in the types and patient characteristics of false-positive findings in digital breast tomosynthesis breast cancer screening.

Google's Gemini Outperforms Providers in Communicating IR Procedures
Large language models like Google's Gemini demonstrate higher accuracy and greater empathy than human providers when answering patient questions about interventional radiology.

FDA Seeks Real-World Performance Insights on AI Medical Devices
FDA calls for healthcare worker feedback to enhance monitoring of AI-enabled medical devices in real-world settings.