
3DSN-net: dual-tandem attention mechanism interaction network for breast tumor classification.

Authors

Li L, Wang M, Li D, Yang T

Affiliations (5)

  • Department of Breast Surgery, Ningbo Medical Center LiHuiLi Hospital, Ningbo, 315048, China.
  • Key Laboratory of Advanced Manufacturing and Intelligent Technology Ministry of Education, School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, 150080, China.
  • Key Laboratory of Advanced Manufacturing and Intelligent Technology Ministry of Education, School of Mechanical and Power Engineering, Harbin University of Science and Technology, Harbin, 150080, China. [email protected].
  • School of Automation, Harbin University of Science and Technology, Harbin, 150080, China. [email protected].
  • School of Automation, Harbin University of Science and Technology, Harbin, 150080, China.

Abstract

Breast cancer is one of the most prevalent malignancies among women worldwide and remains a major public health concern. Accurate classification of breast tumor subtypes is essential for guiding treatment decisions and improving patient outcomes. However, existing deep learning methods for histopathological image analysis often struggle to balance classification accuracy with computational efficiency and fail to fully exploit the deep semantic features of complex tumor images. We developed 3DSN-net, a dual-attention interaction network for multiclass breast tumor classification. The model combines two complementary strategies: (i) spatial–channel attention mechanisms that strengthen the representation of discriminative features, and (ii) deformable convolutional layers that capture fine-grained structural variations in histopathological images. To further improve efficiency, a lightweight attention component supports stable gradient propagation and multi-scale feature fusion. The model was trained and evaluated on two histopathological datasets, BreakHis and BCPSD, and benchmarked against several state-of-the-art CNN and Transformer-based approaches under identical experimental conditions. Experimental results show that 3DSN-net consistently outperforms baseline CNN and Transformer models in both accuracy and robustness while maintaining favorable computational efficiency, achieving 92%–100% accuracy for benign tumors and 86%–99% for malignant tumors, with error rates below 8%. On average, it improves classification accuracy by 3%–5% and ROC-AUC by 0.02–0.04 compared with state-of-the-art methods. The model effectively distinguishes benign and malignant tumors as well as multiple subtypes, highlighting the advantages of combining spatial–channel attention with deformable feature modeling.
By enhancing the interaction between spatial and channel attention mechanisms, the model effectively distinguishes breast cancer subtypes, with only a slight reduction in classification speed on larger datasets due to increased data complexity. This study presents 3DSN-net as a reliable and effective framework for breast tumor classification from histopathological images. Beyond methodological improvements, the enhanced diagnostic performance has direct clinical implications, offering potential to reduce misclassification, assist pathologists in decision-making, and improve patient outcomes. The approach can also be extended to other medical imaging tasks. Future work will focus on optimizing computational efficiency and validating generalizability across larger, multi-center datasets. The online version contains supplementary material available at 10.1186/s12880-025-01936-2.
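The spatial–channel attention described in the abstract can be illustrated schematically. This is not the authors' implementation (which is not reproduced on this page); it is a minimal NumPy sketch of the general pattern such modules follow: a squeeze-and-excitation-style channel gate followed by a spatial gate, applied in tandem. The random weight matrices stand in for parameters that would be learned during training, and the 1×1-conv spatial gate is simplified to an elementwise sigmoid.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def channel_attention(x, reduction=4, rng=None):
    """Gate each channel of x (shape C x H x W) by a learned scalar weight.

    Weights here are random stand-ins for the trained parameters.
    """
    rng = rng or np.random.default_rng(0)
    c = x.shape[0]
    # Squeeze: global average pooling over the spatial dimensions -> (C,)
    squeezed = x.mean(axis=(1, 2))
    # Excitation: bottleneck MLP (ReLU then sigmoid), as in SE-style blocks
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    hidden = np.maximum(w1 @ squeezed, 0.0)
    gate = sigmoid(w2 @ hidden)            # one weight per channel, in (0, 1)
    return x * gate[:, None, None]


def spatial_attention(x):
    """Gate each spatial location by pooling across channels (simplified)."""
    avg_pool = x.mean(axis=0)              # (H, W)
    max_pool = x.max(axis=0)               # (H, W)
    gate = sigmoid(avg_pool + max_pool)    # stand-in for a 1x1 conv + sigmoid
    return x * gate[None, :, :]


# Tandem application: channel gating first, then spatial gating,
# preserving the feature-map shape throughout.
x = np.random.default_rng(1).standard_normal((8, 16, 16))
y = spatial_attention(channel_attention(x))
print(y.shape)  # (8, 16, 16)
```

Because both gates are multiplicative and bounded in (0, 1), they rescale features without changing tensor shapes, which is what allows such modules to be inserted between existing convolutional stages with little computational overhead.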

Topics

Journal Article
