Paradigm-Shifting Attention-based Hybrid View Learning for Enhanced Mammography Breast Cancer Classification with Multi-Scale and Multi-View Fusion
Authors
Abstract
Breast cancer poses a serious threat to women's health, and early detection is crucial for improving patient survival rates. While deep learning has significantly advanced mammographic image analysis, existing methods struggle to balance view consistency with input adaptability. Moreover, current models have difficulty accurately capturing multi-scale features, especially when lesions exhibit subtle variations across scales. To address these challenges, this paper proposes a Hybrid View Learning (HVL) paradigm that unifies traditional Single-View and Multi-View Learning approaches. The core component of this paradigm, our Attention-based Hybrid View Learning (AHVL) framework, incorporates two essential attention mechanisms: Contrastive Switch Attention (CSA) and Selective Pooling Attention (SPA). The CSA mechanism flexibly alternates between self-attention and cross-attention based on data integrity, and integrates a pre-trained language model for contrastive learning to enhance model stability. Meanwhile, the SPA module employs multi-scale feature pooling and selection to capture critical features from mammographic images, overcoming the limitations of traditional models that struggle with fine-grained lesion detection. Experimental validation on the INbreast and CBIS-DDSM datasets shows that the AHVL framework outperforms both single-view and multi-view methods, especially under extreme view-missing conditions. Even with an 80% missing rate on both datasets, AHVL maintains the highest accuracy and exhibits the smallest performance decline on metrics such as F1 score and AUC-PR, demonstrating its robustness and stability. This study redefines mammographic image analysis through attention-based hybrid view processing, setting a new standard for precise and efficient breast cancer diagnosis.
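As a rough illustration of the switching behavior summarized above, the following PyTorch sketch shows one way a CSA-style block could fall back from cross-attention to self-attention when the complementary mammographic view is missing. The module name, feature dimensions, and the use of nn.MultiheadAttention are illustrative assumptions, not the authors' implementation, and the contrastive-learning and language-model components are omitted.

```python
# Minimal sketch (assumed structure, not the authors' code) of the Contrastive
# Switch Attention idea: cross-attend when a second view is available,
# otherwise self-attend on the single available view.
from typing import Optional

import torch
import torch.nn as nn


class ContrastiveSwitchAttention(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, main_view: torch.Tensor,
                aux_view: Optional[torch.Tensor] = None) -> torch.Tensor:
        # main_view: (B, N, dim) tokens from the primary view (e.g. CC).
        # aux_view:  (B, M, dim) tokens from the complementary view (e.g. MLO),
        #            or None when that view is missing from the exam.
        if aux_view is None:
            # Single-view case: plain self-attention over the available view.
            context, _ = self.attn(main_view, main_view, main_view)
        else:
            # Multi-view case: cross-attention, querying the main view
            # against keys/values from the auxiliary view.
            context, _ = self.attn(main_view, aux_view, aux_view)
        return self.norm(main_view + context)


if __name__ == "__main__":
    csa = ContrastiveSwitchAttention()
    cc = torch.randn(2, 49, 256)   # toy token sequences standing in for view features
    mlo = torch.randn(2, 49, 256)
    print(csa(cc, mlo).shape)      # torch.Size([2, 49, 256]) -- both views present
    print(csa(cc, None).shape)     # torch.Size([2, 49, 256]) -- auxiliary view missing
```

The single forward signature with an optional auxiliary input is what would let such a block handle both complete and view-missing exams without architectural changes, which is the adaptability the abstract attributes to the HVL paradigm.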