XGeM: A multi-prompt foundation model for multimodal medical data generation.
Affiliations (8)
- Unit of Artificial Intelligence and Computer Systems, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy. Electronic address: [email protected].
- Department of Diagnostics and Intervention, Biomedical Engineering and Radiation Physics, Umeå University, Umeå, Sweden. Electronic address: [email protected].
- Department of Radiology and Interventional Radiology, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy; Research Unit of Radiology and Interventional Radiology, Department of Medicine and Surgery, Università Campus Bio-Medico di Roma, Rome, Italy. Electronic address: [email protected].
- Department of Diagnostic Imaging and Stereotactic Radiosurgery, Centro Diagnostico Italiano S.p.A., Milan, Italy. Electronic address: [email protected].
- Department of Radiology and Interventional Radiology, Fondazione Policlinico Universitario Campus Bio-Medico, Rome, Italy. Electronic address: [email protected].
- College of Computer Science and Software Engineering, Shenzhen University, Shenzhen, China. Electronic address: [email protected].
- Unit of Artificial Intelligence and Computer Systems, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy. Electronic address: [email protected].
- Unit of Artificial Intelligence and Computer Systems, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy; Department of Diagnostics and Intervention, Biomedical Engineering and Radiation Physics, Umeå University, Umeå, Sweden. Electronic address: [email protected].
Abstract
The adoption of Artificial Intelligence in medical imaging holds great promise, yet it remains hindered by challenges such as data scarcity, privacy concerns, and the need for robust multimodal integration. While recent advances in generative modeling have enabled high-quality synthetic data generation, existing approaches are often limited to unimodal, unidirectional synthesis and therefore cannot jointly synthesize multiple modalities while preserving clinical consistency. To address this challenge, we introduce XGeM, a 6.77-billion-parameter multimodal generative model designed to support flexible, any-to-any synthesis across medical data modalities. XGeM constructs a shared latent space via contrastive learning and introduces a novel Multi-Prompt Training strategy that enables conditioning on arbitrary subsets of input modalities. This design allows the model to adapt to heterogeneous clinical inputs and to generate multiple outputs jointly, preserving both semantic and structural coherence. We validate XGeM extensively: first, we benchmark it against five competitors on MIMIC-CXR, a reference dataset for multi-view chest X-ray and radiological report generation; second, we conduct a Visual Turing Test with expert radiologists to assess the realism and clinical relevance of the generated data, ensuring alignment with real-world clinical scenarios; finally, we demonstrate how XGeM can address key medical data challenges such as anonymization, class imbalance, and data scarcity, underscoring its utility as a foundation model for medical data synthesis. The project page is available at https://cosbidev.github.io/XGeM/.
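To make the Multi-Prompt conditioning idea concrete, the sketch below shows one plausible way to fuse an arbitrary subset of modality embeddings from a shared contrastive latent space into a single conditioning vector. This is a minimal illustration under stated assumptions, not XGeM's actual implementation: the function name `multi_prompt_condition`, the averaging fusion, and the 512-dimensional latents are all hypothetical.

```python
import random
import torch

def multi_prompt_condition(embeddings):
    """Fuse a random non-empty subset of modality embeddings into one
    conditioning vector.

    `embeddings` maps a modality name (e.g. "frontal_xray", "report") to a
    (d,)-shaped tensor assumed to live in a shared contrastive latent space.
    Sampling a different subset at every training step exposes the generator
    to every combination of available inputs.
    """
    names = list(embeddings)
    k = random.randint(1, len(names))    # size of the sampled prompt subset
    subset = random.sample(names, k)     # arbitrary subset of modalities
    stacked = torch.stack([embeddings[m] for m in subset])
    return stacked.mean(dim=0)           # average fusion (one choice among many)

# Usage: condition a generator on whichever inputs happen to be available.
latents = {
    "frontal_xray": torch.randn(512),
    "lateral_xray": torch.randn(512),
    "report": torch.randn(512),
}
cond = multi_prompt_condition(latents)   # (512,) conditioning vector
```

At inference, the same fusion accepts whatever subset of modalities is present for a given patient, which is what allows a model trained this way to support any-to-any generation.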