
Deep learning-based breast cancer detection with customized ensemble attention.

April 29, 2026

Authors

Meva DT, Popat K, Kukadiya H, Jariwala N

Affiliations (4)

  • Faculty of Computer Applications, Marwadi University, Rajkot, Gujarat, India. [email protected].
  • Faculty of Computer Applications, Marwadi University, Rajkot, Gujarat, India.
  • Institute of Computer Science and Applications, Gandhinagar University, Gandhinagar, Gujarat, India.
  • Smt. Tanuben & Dr. Manubhai Trivedi College of Information Science, Veer Narmad South Gujarat University, Surat, Gujarat, India.

Abstract

Breast cancer remains one of the leading malignancies globally, and accurate diagnostic decisions at the early stages of the disease can significantly improve patient prognosis. This paper introduces a new ensemble deep learning framework combining three attention mechanisms tailored to the breast pathology domain: Multi-Scale Channel Attention (MSCA), Spatial-Morphological Attention (SMA), and Hierarchical Dual Attention (HDA). Unlike generic SENet and CBAM mechanisms, these modules are specifically designed for breast pathology: MSCA captures nuclear-level channel statistics across multiple pooling scales, SMA applies learnable morphological gradient operators to tissue boundary regions, and HDA employs depth-dependent gating to dynamically balance spatial and semantic attention across network stages. While domain-adaptive attention has been explored in medical imaging, the specific combination of multi-scale nuclear statistics, learnable morphological gradients, and depth-adaptive gating tailored jointly to histopathological and mammographic imaging represents a distinct and novel architectural contribution. The framework incorporates three complementary CNN backbones (ResNet-50, DenseNet-121, and EfficientNet-B3) augmented with the proposed attention modules, and fuses their predictions using a confidence- and performance-based weighted ensemble strategy. Comprehensive experiments were conducted on three publicly available benchmark datasets: the BreakHis histopathology dataset (1,995 images at 40× magnification), the BACH challenge dataset (400 WSIs producing 9,600 patches), and the CBIS-DDSM mammography dataset (2,620 cases).
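The paper's implementation is not shown here; as a rough illustration only, the confidence- and performance-based weighted fusion described above could be sketched as follows in plain NumPy. The function name, the exact weighting rule (validation accuracy × per-sample max-probability confidence), and the toy probability vectors are all assumptions for illustration, not the authors' code:

```python
import numpy as np

def fuse_predictions(probs, val_accuracies):
    """Confidence- and performance-weighted ensemble fusion (illustrative sketch).

    probs: list of per-model class-probability vectors, each shape (n_classes,)
    val_accuracies: per-model held-out accuracies, used as a performance prior
    """
    fused = np.zeros_like(probs[0])
    total_weight = 0.0
    for p, acc in zip(probs, val_accuracies):
        confidence = p.max()        # model's confidence on this sample
        w = acc * confidence        # combine performance prior and confidence
        fused += w * p
        total_weight += w
    return fused / total_weight     # renormalize to a probability vector

# Hypothetical softmax outputs for one sample from the three backbones
# (ResNet-50, DenseNet-121, EfficientNet-B3) on a binary benign/malignant task.
probs = [np.array([0.7, 0.3]), np.array([0.6, 0.4]), np.array([0.2, 0.8])]
accs = [0.94, 0.93, 0.95]
fused = fuse_predictions(probs, accs)
```

A confident, historically accurate model thus pulls the fused prediction toward its own output, while uncertain or weaker models contribute less.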
The proposed framework yields statistically significant (p < 0.05; paired t-test across five folds) improvements over its individual backbone constituents and surpasses the attention-augmented SENet and CBAM baselines by up to 1.8 pp on BreakHis (95.6 ± 0.5% accuracy), BACH (91.3 ± 0.8% accuracy), and CBIS-DDSM (93.7 ± 1.0% accuracy). Ablation studies confirmed the individual contribution of each attention module (MSCA +1.6%, SMA +1.2%, HDA +1.0%), and a clinical reader study revealed that AI assistance significantly improved pathologists' accuracy on more challenging cases (from 85.4% to 90.2%; p = 0.012). Attention visualizations overlapped meaningfully with pathologist-annotated ROIs (mean IoU = 0.76), compared with generic CBAM-based attention (IoU = 0.59).
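The IoU figures above measure how well a model's attention map covers the regions a pathologist marked. A minimal sketch of that metric, assuming the attention map is binarized at a fixed threshold before comparison (the threshold and the toy 8×8 maps are illustrative assumptions, not from the paper):

```python
import numpy as np

def attention_roi_iou(attention, roi_mask, threshold=0.5):
    """IoU between a thresholded attention map and an annotated ROI mask."""
    att = attention >= threshold          # binarize the attention heatmap
    roi = roi_mask.astype(bool)
    intersection = np.logical_and(att, roi).sum()
    union = np.logical_or(att, roi).sum()
    return intersection / union if union else 0.0

# Toy example: a 4x4 attention blob offset from a 4x4 annotated ROI.
att = np.zeros((8, 8)); att[2:6, 2:6] = 0.9
roi = np.zeros((8, 8), dtype=bool); roi[3:7, 3:7] = True
iou = attention_roi_iou(att, roi)  # overlap 9 px, union 23 px -> 9/23
```

Averaging this score over a set of annotated test images gives a mean IoU comparable to the 0.76 vs. 0.59 figures reported in the abstract.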

Topics

Journal Article
