Explainable AI Outperforms Benchmarks in MRI Breast Cancer Detection

July 15, 2025

A novel explainable AI model accurately detects and localizes breast tumors on MRI, outperforming conventional models—especially in low-cancer-prevalence screening scenarios.

Key Details

  • The explainable fully convolutional data description (FCDD) model was trained and tested on 9,738 breast MRI exams acquired between 2005 and 2022, plus an external multicenter dataset.
  • FCDD outperformed standard binary classification models, achieving AUCs of up to 0.84 (balanced tasks) and 0.72 (imbalanced tasks) vs. 0.81 and 0.69 for the benchmarks (p<0.001).
  • In both internal and external validation, FCDD consistently showed higher detection performance, e.g., an AUC of 0.86 vs. 0.79 on the external dataset.
  • At 97% sensitivity in the imbalanced (screening-realistic) setting, FCDD achieved a specificity of 13% vs. 9% for the benchmark (p=0.02); see the operating-point sketch after this list.
  • The model produces interpretable heatmaps that highlight probable tumor regions, addressing the 'black box' problem of conventional AI models; a minimal heatmap sketch follows this list.
  • Researchers note potential to streamline breast MRI screening, including use with abbreviated MRI protocols.
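
For readers less familiar with operating-point reporting, the "specificity at 97% sensitivity" comparison above can be read directly off an ROC curve: among all thresholds that reach the target sensitivity, take the one with the fewest false positives. The sketch below is purely illustrative and uses simulated data; the prevalence, scores, and helper name are hypothetical, not the study's evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_curve

def specificity_at_sensitivity(y_true, y_score, target_sensitivity=0.97):
    """Specificity achievable while keeping sensitivity (TPR) at or above
    the target, read off the ROC curve. Illustrative helper, not from the study."""
    fpr, tpr, _ = roc_curve(y_true, y_score)
    # Keep operating points that meet the sensitivity target, then take the
    # lowest false-positive rate among them; specificity = 1 - FPR.
    eligible = tpr >= target_sensitivity
    return 1.0 - fpr[eligible].min()

# Simulated imbalanced (screening-like) data with roughly 2% prevalence.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.02, size=5000)
scores = rng.normal(loc=y * 1.0, scale=1.0)   # higher scores for cancers
print(f"Specificity at 97% sensitivity: {specificity_at_sensitivity(y, scores):.2f}")
```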
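
The study's actual FCDD architecture is not reproduced here. As a rough illustration of how an FCDD-style model turns a fully convolutional feature map into both an image-level anomaly score and a pixel-level heatmap, here is a minimal PyTorch sketch; the backbone, channel sizes, and upsampling choice are assumptions for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FCDDSketch(nn.Module):
    """Minimal FCDD-style detector: a fully convolutional backbone (no flatten
    or fully connected layers), so spatial structure survives and a heatmap
    can be recovered. Backbone and sizes are illustrative assumptions."""

    def __init__(self, in_channels: int = 1, feat_channels: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 1),  # 1x1 projection
        )

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)                           # (B, C, h, w)
        # Pseudo-Huber of the per-location feature norm: the FCDD anomaly score.
        a = torch.sqrt(feats.pow(2).sum(dim=1, keepdim=True) + 1.0) - 1.0
        # Upsample the low-resolution score map to image size -> interpretable heatmap.
        heatmap = F.interpolate(a, size=x.shape[-2:], mode="bilinear",
                                align_corners=False)
        # Image-level score: mean anomaly over all spatial locations.
        score = a.mean(dim=(1, 2, 3))
        return score, heatmap

def fcdd_loss(score: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """FCDD-style objective: pull normal exams (label 0.0) toward zero anomaly,
    push anomalous exams (label 1.0) away via a log term."""
    normal = (1.0 - labels) * score
    anomalous = labels * (-torch.log(1.0 - torch.exp(-score) + 1e-9))
    return (normal + anomalous).mean()
```

The key design point is that the heatmap comes from the same feature map used for the exam-level decision, so the localization is a byproduct of scoring rather than a separate post hoc explanation.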

Why It Matters

By demonstrating robust breast cancer detection in both high- and low-prevalence MRI datasets, this explainable AI system moves the field toward more transparent, generalizable tools for radiologists. Its validated, interpretable outputs may enhance clinical adoption of, and trust in, AI-based breast imaging solutions.

Read more
