Rethinking Privacy in Medical Imaging AI: From Metadata and Pixel-level Identification Risks to Federated Learning and Synthetic Data Challenges.
Authors
Affiliations (5)
- Artificial Intelligence and Translational Imaging (ATI) Lab, Department of Radiology, School of Medicine, University of Crete, Voutes Campus, 71003 Heraklion, Crete, Greece.
- Computational BioMedicine Laboratory, Institute of Computer Science, Foundation for Research and Technology (FORTH), Heraklion, Crete, Greece.
- Department of Electrical and Computer Engineering, Hellenic Mediterranean University, Heraklion, Crete, Greece.
- Department of Medical Imaging, University Hospital of Heraklion, Heraklion, Crete, Greece.
- Division of Radiology, Department for Clinical Science, Intervention and Technology (CLINTEC), Karolinska Institutet, Stockholm, Sweden.
Abstract
Metadata, which refers to non-image information such as patient identifiers, acquisition parameters, and institutional details, has long been the primary focus of de-identification efforts when constructing datasets for artificial intelligence (AI) applications in medical imaging. However, it is now evident that information intrinsic to the image itself, at the pixel level (e.g., intensity values), can also be exploited by deep learning models, potentially revealing sensitive patient data and posing privacy risks. This manuscript discusses both metadata and pixel-level sources of identifiable information in medical imaging studies, highlighting the potential risks of overlooking their presence. Privacy-preserving approaches such as federated learning and synthetic data generation are also reviewed, with emphasis on their limitations, particularly vulnerabilities to model inversion and inference attacks, which must be considered when developing and deploying AI in medical imaging. ©RSNA, 2025.