Application of deep learning technology in breast cancer: a systematic review of segmentation, detection, and classification approaches.

January 4, 2026

Authors

Gao S, Liu J, Li L, Yang D, Miao Y, Zhang X, Han Q, Shi Y, Wu J, Zhang K

Affiliations (12)

  • Information Center, Affiliated Hospital of Hebei University, Baoding, China.
  • Basic Research Key Laboratory of General Surgery for Digital Medicine, Affiliated Hospital of Hebei University, Baoding, China.
  • Medical Affairs Department, Affiliated Hospital of Hebei University, Baoding, China.
  • Institute of Life Science and Green Development, Hebei University, Baoding, China.
  • 3D Image and 3D Printing Center, Affiliated Hospital of Hebei University, Baoding, China.
  • Clinical Medical College of Hebei University, Affiliated Hospital of Hebei University, Baoding, China.
  • Ultrasound Department, Affiliated Hospital of Hebei University, Baoding, China.
  • Breast Surgery, Affiliated Hospital of Hebei University, Baoding, China.
  • Basic Research Key Laboratory of General Surgery for Digital Medicine, Affiliated Hospital of Hebei University, Baoding, China. [email protected].
  • Institute of Life Science and Green Development, Hebei University, Baoding, China. [email protected].
  • 3D Image and 3D Printing Center, Affiliated Hospital of Hebei University, Baoding, China. [email protected].
  • Thoracic Surgery Department, Affiliated Hospital of Hebei University, Baoding, China. [email protected].

Abstract

To provide a critical and clinically oriented synthesis of recent deep learning developments for breast cancer imaging across major modalities, with emphasis on model architectures, dataset characteristics, methodological quality, and implications for clinical translation.

Following PRISMA guidelines, we systematically searched PubMed, Scopus, Web of Science, ScienceDirect, and Google Scholar for studies published from 2020 to 2024 on deep learning applied to breast imaging. Sixty-five studies using convolutional neural networks (CNNs), Transformers, or hybrid architectures were included. Datasets were comparatively profiled, and study quality and risk of bias were appraised using QUADAS-2.

CNN-based classifiers, particularly on mammography and pathology, commonly achieved median accuracies above 90% and AUCs around or above 0.95, while CNN detectors reported high sensitivities and mid-90% accuracies, supporting their potential role as second readers. CNN-derived U-Net variants dominated segmentation tasks, yielding high Dice and IoU values for tumour and fibroglandular-tissue delineation. Transformer and hybrid models showed advantages when global context, multi-view inputs or volumetric data were critical (e.g. dense breasts, DBT, DCE-MRI), where they improved lesion localisation and patient-level risk stratification. However, QUADAS-2 and dataset profiling revealed substantial limitations: most studies were retrospective, single-centre and class-imbalanced, with narrow demographic representation, heterogeneous reference standards and scarce external or prospective validation. These factors raise concerns about bias, overfitting, fairness and robustness in real-world deployment. Only a minority of studies systematically addressed interpretability, workflow integration or regulatory requirements.
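The Dice and IoU scores used to evaluate the segmentation studies above have simple set-overlap definitions: Dice = 2|A∩B| / (|A| + |B|) and IoU (Jaccard index) = |A∩B| / |A∪B|. A minimal NumPy sketch, using toy binary masks rather than data from any of the reviewed studies:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice similarity coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * intersection / denom if denom else 1.0

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """Intersection over Union (Jaccard index): |A∩B| / |A∪B|."""
    pred, target = pred.astype(bool), target.astype(bool)
    union = np.logical_or(pred, target).sum()
    return np.logical_and(pred, target).sum() / union if union else 1.0

# Toy 4x4 masks: a predicted tumour region vs. its ground-truth annotation.
pred   = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
target = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(round(dice_coefficient(pred, target), 3))  # 0.857
print(round(iou(pred, target), 3))               # 0.75
```

Note that Dice is always at least as large as IoU for the same pair of masks, which is worth keeping in mind when comparing headline numbers across studies that report different metrics.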
Deep learning offers considerable promise to support early detection, risk stratification and workflow efficiency across breast imaging modalities, with CNNs and Transformers providing complementary strengths for local fine-detail versus global contextual modelling. Nevertheless, the current evidence base is constrained by heterogeneous designs, limited reporting of study quality and biased datasets, so reported performance should not be interpreted as definitive proof of clinical readiness. Future research should prioritise multi-centre, demographically diverse cohorts, transparent quality assessment, external and prospective validation, and evaluation of reader and workflow impact. Developing explainable, fairness-aware and privacy-preserving systems, such as those enabled by interpretable architectures and federated learning, will be essential for safe and equitable translation of deep learning tools into routine breast cancer care.

Topics

Journal Article, Review
