
Using Explainable AI to Characterize Features in the Mirai Mammographic Breast Cancer Risk Prediction Model.

Authors

Wang YK, Klanecek Z, Wagner T, Cockmartin L, Marshall N, Studen A, Jeraj R, Bosmans H

Affiliations (5)

  • Department of Imaging and Pathology, University Hospital Leuven, Herestraat 49, Box 7003, 3000 Leuven, Belgium.
  • Faculty of Mathematics and Physics, University of Ljubljana, Ljubljana, Slovenia.
  • Department of Radiology, University Hospital Leuven, Leuven, Belgium.
  • Jožef Stefan Institute, Ljubljana, Slovenia.
  • Department of Medical Physics, University of Wisconsin-Madison, Madison, Wis.

Abstract

<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To evaluate whether features extracted by Mirai can be aligned with mammographic observations, and contribute meaningfully to the prediction. Materials and Methods This retrospective study examined the correlation of 512 Mirai features with mammographic observations in terms of receptive field and anatomic location. A total of 29,374 screening examinations with mammograms (10,415 women, mean age at examination 60 [SD: 11] years) from the EMBED Dataset (2013-2020) were used to evaluate feature importance using a feature-centric explainable AI pipeline. Risk prediction was evaluated using only calcification features (CalcMirai) or mass features (MassMirai) against Mirai. Performance was assessed in screening and screen-negative (time-to-cancer > 6 months) populations using the area under the receiver operating characteristic curve (AUC). Results Eighteen calcification features and 18 mass features were selected for CalcMirai and MassMirai, respectively. Both CalcMirai and MassMirai had lower performance than Mirai in lesion detection (screening population, 1-year AUC: Mirai, 0.81 [95% CI: 0.78, 0.84], CalcMirai, 0.76 [95% CI: 0.73, 0.80]; MassMirai, 0.74 [95% CI: 0.71, 0.78]; <i>P</i> values < 0.001). In risk prediction, there was no evidence of a difference in performance between CalcMirai and Mirai (screen-negative population, 5-year AUC: Mirai, 0.66 [95% CI: 0.63, 0.69], CalcMirai, 0.66 [95% CI: 0.64, 0.69]; <i>P</i> value: 0.71); however, MassMirai achieved lower performance than Mirai (AUC, 0.57 [95% CI: 0.54, 0.60]; <i>P</i> value < .001). Radiologist review of calcification features confirmed Mirai's use of benign calcification in risk prediction. Conclusion The explainable AI pipeline demonstrated that Mirai implicitly learned to identify mammographic lesion features, particularly calcifications, for lesion detection and risk prediction. ©RSNA, 2025.

Topics

Journal Article
