A novel virtual patient approach for cross-patient multimodal fusion in enhanced breast cancer detection.

December 8, 2025 · PubMed

Authors

Akbari Y, Abdullakutty F, Al-Maadeed S, Saady RA, Bouridane A, Hamoudi R

Affiliations (5)

  • Department of Computer Science and Engineering, Qatar University, Doha, 2713, Qatar. Electronic address: [email protected].
  • Department of Computer Science and Engineering, Qatar University, Doha, 2713, Qatar.
  • College of Medicine, QU Health, Qatar University, Doha, 2713, Qatar.
  • Center for Data Analytics and Cybernetics, University of Sharjah, Sharjah, 27272, United Arab Emirates; BIMAI-Lab, Biomedically Informed Artificial Intelligence Laboratory, University of Sharjah, Sharjah, 27272, United Arab Emirates.
  • Research Institute for Medical and Health Sciences, University of Sharjah, Sharjah, 27272, United Arab Emirates; Clinical Sciences Department, College of Medicine, University of Sharjah, Sharjah, 27272, United Arab Emirates; BIMAI-Lab, Biomedically Informed Artificial Intelligence Laboratory, University of Sharjah, Sharjah, 27272, United Arab Emirates; Division of Surgery and Interventional Science, University College London, London, NW3 2PF, United Kingdom.

Abstract

Multimodal medical imaging, combining conventional modalities such as mammography, ultrasound, and histopathology, has shown significant promise for improving breast cancer detection accuracy. However, clinical implementation faces substantial challenges due to incomplete patient-matched multimodal datasets and resource constraints. Traditional approaches require complete imaging workups from individual patients, limiting their practical applicability. This study investigates whether cross-patient multimodal fusion, which combines imaging modalities from different patients, can provide additional diagnostic information beyond single-modality approaches. We hypothesize that leveraging complementary information from heterogeneous patient populations enhances cancer detection performance even when the modalities originate from separate individuals.

We developed a novel virtual patient framework that systematically combines imaging modalities across different patients using quality-driven selection strategies. Two training paradigms were evaluated: a Fixed scenario with 1:1:1 cross-patient combinations (∼250 virtual patients) and a Combinatorial scenario with systematic companion selection (∼20,000 virtual patients). Multiple fusion architectures (concatenation, attention, and averaging) were assessed, and we designed a novel co-attention mechanism that enables sophisticated cross-modal interaction through learned attention weights. These fusion networks were evaluated on histopathology (BCSS), mammography, and ultrasound (BUSI) datasets. External validation using the ICIAR2018 BACH Challenge dataset as an alternative histopathology source demonstrated the generalizability of the approach, achieving promising accuracy despite differences in staining protocols and acquisition procedures across institutions. All models were evaluated on consistent fixed test sets to ensure fair comparison. The resulting virtual patient dataset is well-suited for multiple breast cancer analysis tasks, including detection, segmentation, and Explainable Artificial Intelligence (XAI) applications.

Cross-patient multimodal fusion demonstrated significant improvements over single-modality approaches. The best single modality (mammography) achieved 75.36% accuracy, while the optimal fusion combination (histopathology-mammography) reached 97.10%, an improvement of 21.74 percentage points. Quantitative validation through silhouette analysis (score: 0.894) confirms that the observed gains reflect genuine feature-space structure rather than visualization artifacts.

Cross-patient multimodal fusion shows significant potential for enhancing breast cancer detection, particularly in real-world scenarios where complete patient-matched multimodal data are unavailable. This approach represents a paradigm shift toward leveraging heterogeneous information sources for improved diagnostic performance.
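
To make the virtual patient construction concrete, the sketch below shows one plausible reading of the two training paradigms described in the abstract. All names here (Sample, fixed_virtual_patients, the quality field, top_k) are illustrative assumptions, not the authors' published code.

```python
# Hypothetical sketch of cross-patient "virtual patient" construction
# under the Fixed (1:1:1) and Combinatorial (companion-selection) scenarios.
import random
from dataclasses import dataclass

@dataclass
class Sample:
    patient_id: str
    modality: str      # "histopathology" | "mammography" | "ultrasound"
    image_path: str
    label: int         # 0 = benign, 1 = malignant
    quality: float     # quality score driving companion selection (assumed)

def fixed_virtual_patients(histo, mammo, us, n=250, seed=0):
    """Fixed scenario: random 1:1:1 pairing across patients (~250 triples)."""
    rng = random.Random(seed)
    triples = zip(rng.sample(histo, n), rng.sample(mammo, n), rng.sample(us, n))
    # Keep only label-consistent triples so each virtual patient has one target.
    return [t for t in triples if len({s.label for s in t}) == 1]

def combinatorial_virtual_patients(anchors, companions, top_k=5):
    """Combinatorial scenario: pair each anchor image with its top-k
    highest-quality, label-matched companions from another modality."""
    ranked = sorted(companions, key=lambda s: s.quality, reverse=True)
    return [(a, c) for a in anchors
            for c in ranked[:top_k] if c.label == a.label]
```

The label-consistency filter is our assumption: some supervision rule is needed so that a cross-patient combination carries a single ground-truth label.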
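
The fusion variants named in the abstract could be sketched in PyTorch roughly as follows. The feature dimension, head count, pooling, and two-class head are assumptions, and the co-attention block is a generic bidirectional cross-attention stand-in rather than the paper's exact mechanism.

```python
# Hedged PyTorch sketch: concatenation and averaging baselines plus a
# co-attention block in which each modality's features attend to the other's.
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    def __init__(self, dim=512, heads=8, num_classes=2):
        super().__init__()
        self.attn_a = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_b = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Sequential(nn.LayerNorm(2 * dim),
                                  nn.Linear(2 * dim, num_classes))

    def forward(self, feat_a, feat_b):
        # feat_a, feat_b: (batch, tokens, dim) outputs of two modality encoders.
        a2b, _ = self.attn_a(feat_a, feat_b, feat_b)  # modality A attends to B
        b2a, _ = self.attn_b(feat_b, feat_a, feat_a)  # modality B attends to A
        fused = torch.cat([a2b.mean(dim=1), b2a.mean(dim=1)], dim=-1)
        return self.head(fused)

def concat_fusion(feat_a, feat_b):
    """Baseline: concatenate mean-pooled features from the two modalities."""
    return torch.cat([feat_a.mean(dim=1), feat_b.mean(dim=1)], dim=-1)

def average_fusion(feat_a, feat_b):
    """Baseline: element-wise average of mean-pooled features."""
    return 0.5 * (feat_a.mean(dim=1) + feat_b.mean(dim=1))
```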
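
The silhouette analysis reported above (score: 0.894) is a standard scikit-learn computation; this sketch uses random stand-in arrays where the paper would use its fused embeddings and benign/malignant labels.

```python
# Silhouette check on a fused feature space; both arrays are placeholders.
import numpy as np
from sklearn.metrics import silhouette_score

fused_features = np.random.rand(500, 1024)    # stand-in for fused embeddings
labels = np.random.randint(0, 2, size=500)    # stand-in for class labels

score = silhouette_score(fused_features, labels)  # in [-1, 1]; higher = tighter clusters
print(f"silhouette score: {score:.3f}")
```

A score near 0.894 would indicate compact, well-separated class clusters in the fused feature space, supporting the claim that the reported gains are not visualization artifacts.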

Topics

Journal Article
