Breast cancer prediction using mammography exams for real hospital settings.

Authors

Pathak S, Schlötterer J, Geerdink J, Veltman J, van Keulen M, Strisciuglio N, Seifert C

Affiliations (6)

  • University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands; Hospital Group Twente, Geerdinksweg 141, Hengelo, 7555 DL, The Netherlands. Electronic address: [email protected].
  • University of Marburg, Biegenstraße 10, Marburg, 35037, Germany; University of Mannheim, Schloss Ehrenhof Ost, Mannheim, 68161, Germany.
  • Hospital Group Twente, Geerdinksweg 141, Hengelo, 7555 DL, The Netherlands.
  • University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands; Hospital Group Twente, Geerdinksweg 141, Hengelo, 7555 DL, The Netherlands.
  • University of Twente, Drienerlolaan 5, Enschede, 7522 NB, The Netherlands.
  • University of Marburg, Biegenstraße 10, Marburg, 35037, Germany.

Abstract

Breast cancer prediction models for mammography typically assume that annotations are available for individual images or regions of interest (ROIs) and that each patient has a fixed number of images. These assumptions do not hold in real hospital settings, where clinicians provide only a final diagnosis for the entire mammography exam (case). Since data in real hospital settings grow with continuous patient intake while manual annotation efforts do not, we develop a framework for case-level breast cancer prediction that requires no manual annotation and can be trained with the case labels readily available at the hospital. Specifically, we propose a two-level multi-instance learning (MIL) approach that operates at the patch and image level for case-level breast cancer prediction and evaluate it on two public datasets and one private dataset. We propose a novel domain-specific MIL pooling based on the observation that breast cancer may occur in one or both breasts, whereas images of both breasts are always taken as a precaution during mammography. We further propose a dynamic training procedure for training our MIL framework on a variable number of images per case. We show that our two-level MIL model can be applied in real hospital settings, where only case labels and a variable number of images per case are available, without any loss in performance compared to models trained on image labels. Although trained only with weak (case-level) labels, the model can indicate in which breast side, mammography view, and view region the abnormality lies.
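To make the two-level idea concrete, the sketch below shows one plausible way such a model could be organized in PyTorch: patches are pooled into an image embedding, images are pooled per breast side, and the case score is the maximum over the two sides, reflecting that cancer may be present in either breast. Everything here is an illustrative assumption rather than the architecture from the paper: the names TwoLevelMILCaseModel and AttentionMILPooling, the tiny CNN patch encoder, the gated-attention pooling, and the side-wise max are stand-ins chosen only to demonstrate a two-level MIL pipeline that accepts a variable number of images per case.

    # Hypothetical sketch of a two-level MIL pipeline for case-level mammography
    # classification. The patch encoder, attention pooling, and side-wise max
    # pooling are illustrative assumptions, not the authors' published method.
    import torch
    import torch.nn as nn


    class AttentionMILPooling(nn.Module):
        """Attention-style pooling over a bag of instance embeddings (assumed)."""

        def __init__(self, dim: int, hidden: int = 128):
            super().__init__()
            self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # x: (num_instances, dim) -> attention-weighted sum, shape (dim,)
            w = torch.softmax(self.attn(x), dim=0)
            return (w * x).sum(dim=0)


    class TwoLevelMILCaseModel(nn.Module):
        """Patch-level MIL per image, then image-level MIL per case.

        The 'domain-specific' case pooling here is a guess: pool images within
        each breast side, then take the maximum over sides, since cancer may
        appear in either (or both) breasts.
        """

        def __init__(self, feat_dim: int = 256):
            super().__init__()
            # Tiny CNN patch encoder placeholder; a real system would use a deeper backbone.
            self.patch_encoder = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
            )
            self.patch_pool = AttentionMILPooling(feat_dim)  # patches -> image embedding
            self.image_pool = AttentionMILPooling(feat_dim)  # images  -> side embedding
            self.classifier = nn.Linear(feat_dim, 1)

        def forward(self, case: dict) -> torch.Tensor:
            # case maps a breast side to a list of images; each image is a
            # (num_patches, 1, H, W) stack. Looping handles a variable number
            # of images per side and per case.
            side_embeddings = []
            for side, images in case.items():
                image_embeddings = []
                for patches in images:
                    feats = self.patch_encoder(patches)          # (num_patches, feat_dim)
                    image_embeddings.append(self.patch_pool(feats))
                side_embeddings.append(self.image_pool(torch.stack(image_embeddings)))
            # Case logit = max over breast sides.
            side_logits = self.classifier(torch.stack(side_embeddings)).squeeze(-1)
            return side_logits.max()


    if __name__ == "__main__":
        model = TwoLevelMILCaseModel()
        # Synthetic case: two views of the left breast, one of the right,
        # each cut into a different number of 64x64 patches.
        case = {
            "left":  [torch.randn(12, 1, 64, 64), torch.randn(9, 1, 64, 64)],
            "right": [torch.randn(11, 1, 64, 64)],
        }
        logit = model(case)
        loss = nn.functional.binary_cross_entropy_with_logits(logit, torch.tensor(1.0))
        loss.backward()
        print(float(logit), float(loss))

Because only the final case score is supervised, the per-side and per-image attention weights in such a sketch are what would let the model point toward the breast side, view, and region containing the abnormality, in the spirit of the weak-label localization described above.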

Topics

Journal Article
