
An Alignment and Imputation Network (AINet) for Breast Cancer Diagnosis with Multimodal Multi-view Ultrasound Images.

October 24, 2025

Authors

Chen H, Li Y, Zhang J, Yang L, Sun Y, Chen Y, Zhou S, Li Z, Qian X, Xu Q, Shen D

Abstract

Recently, numerous deep learning models have been proposed for breast cancer diagnosis using multimodal multi-view ultrasound images. However, their performance is often degraded because they overlook interactions between different modalities and views. Moreover, existing methods struggle to handle cases where certain modalities or views are missing, which limits their clinical applicability. To address these issues, we propose a novel Alignment and Imputation Network (AINet) that integrates 1) alignment and imputation pre-training, and 2) hierarchical fusion fine-tuning. Specifically, in the pre-training stage, cross-modal contrastive learning is employed to align features across different modalities and thereby capture inter-modal interactions. To simulate missing-modality (view) scenarios, we randomly mask out features and then impute them by leveraging inter-modal and inter-view relationships. Following the clinical diagnosis procedure, the subsequent fine-tuning stage further incorporates modality-level and view-level fusion in a hierarchical manner. The proposed AINet is developed and evaluated on three datasets comprising 15,223 subjects in total. Experimental results demonstrate that AINet significantly outperforms state-of-the-art methods, particularly in handling missing modalities (views). This highlights its robustness and potential for real-world clinical applications.
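To make the pre-training stage described in the abstract more concrete, below is a minimal PyTorch-style sketch of cross-modal contrastive alignment combined with random feature masking and imputation. All module names, dimensions, the masking ratio, and the loss weighting are assumptions chosen for illustration; this is a sketch of the general technique, not the authors' actual AINet implementation.

```python
# Hypothetical sketch of the pre-training idea: align two modalities with an
# InfoNCE-style contrastive loss, then randomly mask view features (simulating
# missing views) and impute them from the remaining modality/view tokens.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AlignImputePretrain(nn.Module):
    def __init__(self, feat_dim=256, mask_ratio=0.3, temperature=0.07):
        super().__init__()
        # Lightweight projection heads, one per modality (e.g., B-mode vs. elastography).
        self.proj_a = nn.Linear(feat_dim, feat_dim)
        self.proj_b = nn.Linear(feat_dim, feat_dim)
        # A small transformer encoder imputes masked tokens from unmasked ones.
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=4, batch_first=True)
        self.imputer = nn.TransformerEncoder(layer, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        self.mask_ratio = mask_ratio
        self.temperature = temperature

    def contrastive_loss(self, za, zb):
        # Symmetric InfoNCE loss: paired modality embeddings in a batch attract,
        # unpaired ones repel.
        za = F.normalize(za, dim=-1)
        zb = F.normalize(zb, dim=-1)
        logits = za @ zb.t() / self.temperature
        targets = torch.arange(za.size(0), device=za.device)
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

    def forward(self, feats_a, feats_b):
        # feats_a, feats_b: (batch, n_views, feat_dim) per-view features from two modalities.
        za = self.proj_a(feats_a.mean(dim=1))   # modality-level embedding, modality A
        zb = self.proj_b(feats_b.mean(dim=1))   # modality-level embedding, modality B
        loss_align = self.contrastive_loss(za, zb)

        # Randomly mask view tokens to simulate missing views, then reconstruct
        # them using inter-modal and inter-view context.
        tokens = torch.cat([feats_a, feats_b], dim=1)          # (batch, 2*n_views, feat_dim)
        mask = torch.rand(tokens.shape[:2], device=tokens.device) < self.mask_ratio
        masked = torch.where(mask.unsqueeze(-1),
                             self.mask_token.expand_as(tokens), tokens)
        imputed = self.imputer(masked)
        loss_impute = F.mse_loss(imputed[mask], tokens[mask])

        # Equal weighting of the two objectives is an arbitrary illustrative choice.
        return loss_align + loss_impute
```

A fine-tuning stage in this spirit would then fuse view-level features within each modality before fusing across modalities (hierarchical fusion), so that the classifier can still operate when some view or modality tokens are imputed rather than observed.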

Topics

Journal Article
