
Augmented intelligence for multimodal virtual biopsy in breast cancer using generative artificial intelligence.

December 26, 2025

Authors

Rofena A, Piccolo CL, Zobel BB, Soda P, Guarrasi V

Affiliations (5)

  • Unit of Artificial Intelligence and Computer Systems, University Campus Bio-Medico of Roma, Rome, Italy. Electronic address: [email protected].
  • Department of Radiology, Fondazione Policlinico Campus Bio-Medico, Rome, Italy.
  • Department of Radiology, Fondazione Policlinico Campus Bio-Medico, Rome, Italy; Department of Radiology, University Campus Bio-Medico of Roma, Rome, Italy.
  • Unit of Artificial Intelligence and Computer Systems, University Campus Bio-Medico of Roma, Rome, Italy; Department of Diagnostics and Intervention, Radiation Physics, Biomedical Engineering, Umeå University, Umeå, Sweden. Electronic address: [email protected].
  • Unit of Artificial Intelligence and Computer Systems, University Campus Bio-Medico of Roma, Rome, Italy.

Abstract

This study proposes a multimodal, multi-view deep learning approach for breast cancer virtual biopsy, the non-invasive classification of breast lesions as malignant or benign, by integrating Full-Field Digital Mammography (FFDM) and Contrast-Enhanced Spectral Mammography (CESM). The work addresses the critical challenge of missing CESM data by introducing generative artificial intelligence (AI) to synthesize CESM images when they are unavailable, ensuring the continuity of diagnostic workflows. The proposed method uses FFDM and CESM images in both craniocaudal (CC) and mediolateral oblique (MLO) views. When CESM is missing, a CycleGAN-based generative model produces synthetic CESM images from FFDM inputs. For classification, three convolutional neural networks (ResNet18, ResNet50, and VGG16) are employed, and a two-stage late fusion strategy integrates view-specific and modality-specific malignancy probabilities, weighted by the Matthews Correlation Coefficient (MCC), into a final malignancy score. The system's robustness under varying degrees of missing CESM data is tested by incrementally replacing real CESM inputs with synthetic ones and evaluating classification performance using AUC, G-mean, and MCC. CycleGAN achieved high-fidelity CESM synthesis, with Peak Signal-to-Noise Ratio (PSNR) exceeding 24 dB and Structural Similarity Index (SSIM) above 0.8 across both CC and MLO views. For lesion classification, the multimodal configuration combining FFDM and CESM consistently outperformed the unimodal FFDM-only setup. Notably, even when CESM was entirely replaced by synthetic images, the multimodal approach still improved virtual biopsy performance compared to FFDM alone. Although classification performance declined as the proportion of synthetic CESM increased, the use of synthetic data remained beneficial.
This work demonstrates that generative AI can effectively address missing-modality challenges in breast cancer diagnostics by synthesizing CESM images to enhance FFDM-based virtual biopsy pipelines. In the absence of real CESM data, incorporating synthetic images improves lesion classification compared to using FFDM alone, offering a non-invasive alternative to support clinical decision-making. Moreover, by releasing the extended CESM@UCBM dataset, this study contributes a valuable resource for advancing research and innovation in breast multimodal diagnostic systems.
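The MCC-weighted two-stage late fusion described in the abstract can be sketched in a few lines. This is a minimal illustration only, not the authors' code: the stage ordering (views fused within each modality first, then modalities fused), the clipping of negative MCC weights, and all probability and MCC values below are assumptions.

```python
import numpy as np

def mcc_weighted_fusion(probs, mccs):
    """Weighted average of malignancy probabilities, with validation MCCs as
    weights (clipped at 0 so a non-informative branch cannot flip the score)."""
    w = np.clip(np.asarray(mccs, dtype=float), 0.0, None)
    if w.sum() == 0.0:
        w = np.ones_like(w)  # fall back to a plain average
    return float(np.dot(w, probs) / w.sum())

# Hypothetical per-branch outputs: P(malignant) from one CNN per (modality, view)
p = {("FFDM", "CC"): 0.62, ("FFDM", "MLO"): 0.58,
     ("CESM", "CC"): 0.81, ("CESM", "MLO"): 0.77}
# Hypothetical validation MCCs for the same branches
m = {("FFDM", "CC"): 0.45, ("FFDM", "MLO"): 0.40,
     ("CESM", "CC"): 0.60, ("CESM", "MLO"): 0.55}

# Stage 1: fuse CC and MLO views within each modality
per_modality = {
    mod: mcc_weighted_fusion([p[(mod, v)] for v in ("CC", "MLO")],
                             [m[(mod, v)] for v in ("CC", "MLO")])
    for mod in ("FFDM", "CESM")
}

# Stage 2: fuse the two modality-level scores into the final malignancy score
modality_mcc = {"FFDM": 0.47, "CESM": 0.62}  # assumed modality-level MCCs
final = mcc_weighted_fusion([per_modality[mod] for mod in ("FFDM", "CESM")],
                            [modality_mcc[mod] for mod in ("FFDM", "CESM")])
print(round(final, 3))
```

Because each fusion stage is a convex combination, the final score always lies between the smallest and largest branch probabilities, and branches with higher validation MCC pull it proportionally harder toward their prediction.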

Topics

Journal Article
