
Integrating deep learning and radiomics for precise identification of luminal A/B breast cancer subtypes on dynamic contrast-enhanced MRI.

February 3, 2026

Authors

Shangguan J, Shchukina E, Monov D, Larina S

Affiliations (4)

  • Radiology Department, Puyang Oilfield General Hospital, Puyang, China. [email protected].
  • Department of Psychiatry and Narcology, Sechenov First State Medical University, Moscow, Russia.
  • Department of Anaesthesiology and Intensive Care, Medical University Sofia, Sofia, Bulgaria.
  • Department of Biology and General Genetics, Sechenov First State Medical University, Moscow, Russia.

Abstract

Accurate differentiation between the luminal A and luminal B subtypes of breast cancer is critical for selecting therapeutic strategies, yet current approaches rely predominantly on invasive biopsy and immunohistochemical (IHC) analysis. The development of non-invasive, imaging-based methods capable of reliably classifying tumor subtypes therefore remains an urgent need. This study aimed to develop and validate a hybrid classification model combining radiomic and deep learning features extracted from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) to differentiate between the luminal A and B subtypes of invasive breast cancer.

The study included 312 women from China, Russia, and Bulgaria with confirmed luminal subtypes of breast cancer. All patients underwent standardized pre-treatment DCE-MRI, and subtypes were determined by IHC. Tumors were semi-automatically segmented, and radiomic features were extracted with PyRadiomics; deep features were additionally extracted from the DCE-MRI volumes with a 3D ResNet-50 convolutional neural network. Three models were constructed: a radiomics-based model, a deep learning-based model, and a hybrid model integrating both approaches via a stacking ensemble. Model performance was evaluated using AUC, sensitivity, specificity, and other metrics on a test dataset and on an independent external validation cohort (n = 148). SHAP and Grad-CAM techniques were applied for model interpretability.

The hybrid model significantly outperformed the individual approaches, achieving an AUC of 0.921, sensitivity of 88.6%, and specificity of 89.7% on the test dataset; performance remained robust in the external validation cohort (AUC = 0.903). Statistical tests (DeLong and bootstrapping) confirmed the significance of these differences. The most important contributors were radiomic features related to shape and texture (e.g., entropy, sphericity) and high-level deep features, and visualizations highlighted clinically relevant areas of model attention.
The proposed hybrid approach represents a clinically applicable, non-invasive method for classifying breast cancer subtypes, potentially complementing or partially replacing biopsy in selected cases. It enhances diagnostic accuracy while maintaining interpretability. Future work will focus on prospective validation and integration with genomic and clinical data within the framework of precision oncology.
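The hybrid design described above — fusing per-lesion radiomic and deep feature vectors, then combining base learners through a stacking ensemble evaluated by AUC — can be sketched as follows. This is a minimal illustrative sketch on synthetic data, not the authors' actual pipeline: the feature dimensions, the choice of random forest and logistic regression as base learners, and the logistic-regression meta-learner are all assumptions made here for demonstration.

```python
# Hypothetical sketch of a radiomics + deep-feature stacking ensemble.
# Synthetic stand-ins replace real PyRadiomics and 3D ResNet-50 outputs;
# all dimensions and learner choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

n = 312                                   # cohort size from the abstract
radiomic = rng.normal(size=(n, 100))      # ~100 radiomic features (assumed)
deep = rng.normal(size=(n, 512))          # 512-d pooled CNN features (assumed)
y = rng.integers(0, 2, size=n)            # 0 = luminal A, 1 = luminal B
# Inject a weak class signal so the toy problem is learnable.
radiomic[y == 1, 0] += 1.0
deep[y == 1, 0] += 1.0

# Early fusion: concatenate both feature sets per lesion.
X = np.hstack([radiomic, deep])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Stacking ensemble: base learners' cross-validated predictions feed a
# logistic-regression meta-learner.
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("lr", make_pipeline(StandardScaler(),
                             LogisticRegression(max_iter=1000))),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, stack.predict_proba(X_te)[:, 1])
print(f"toy hold-out AUC: {auc:.3f}")
```

In practice the radiomic block would come from PyRadiomics run on the segmented tumor masks and the deep block from pooled 3D ResNet-50 activations, with the reported AUCs obtained on the held-out test set and the external validation cohort rather than on synthetic data.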

Topics

Journal Article
