
Decipher-MR: a vision-language foundation model for 3D MRI representations.

April 4, 2026

Authors

Yang Z, DSouza N, Megyeri I, Xu X, Shandiz AH, Haddadpour F, Koos K, Rusko L, Valeriano E, Swaminathan B, Wu L, Bhatia P, Kass-Hout T, Bas E


Abstract

Magnetic Resonance Imaging (MRI) is a critical imaging modality in clinical diagnosis and research, yet its complexity and heterogeneity hinder scalable, generalizable machine learning. Although foundation models have revolutionized language and vision tasks, their application to MRI remains constrained by data scarcity and narrow anatomical focus. We present Decipher-MR, a 3D MRI-specific vision-language foundation model trained on 200,000 MRI series from over 22,000 studies spanning diverse anatomical regions, sequences, and pathologies. Decipher-MR integrates self-supervised vision learning with report-guided text supervision to build robust representations for broad applications. For efficient downstream use, Decipher-MR adopts a modular design: lightweight, task-specific decoders are tuned on top of a frozen pretrained encoder. Under this setting, we evaluate Decipher-MR across disease classification, demographic prediction, anatomical localization, and cross-modal retrieval, demonstrating consistent improvements over existing foundation models and task-specific approaches. These results support Decipher-MR as a promising and reusable foundation for MRI-based AI, within the scope of the tasks and datasets evaluated.
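The frozen-encoder, lightweight-decoder setup described in the abstract is essentially linear probing: encoder weights stay fixed and only a small task head is trained. A minimal NumPy sketch of that pattern, where a fixed random projection stands in for the (non-public) pretrained Decipher-MR encoder and all names, shapes, and hyperparameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the pretrained 3D encoder: a fixed (frozen)
# random projection from flattened toy volumes to a 32-d embedding space.
W_enc = rng.standard_normal((512, 32)) / np.sqrt(512)  # never updated

def encode(volumes):
    # volumes: (batch, 8, 8, 8) toy "MRI" arrays, flattened to 512 features
    return volumes.reshape(len(volumes), -1) @ W_enc

# Lightweight task-specific decoder: a single trainable linear layer
# producing 3-class logits (class count is arbitrary here).
W_dec = np.zeros((32, 3))

def train_decoder_step(volumes, labels, lr=0.1):
    """One full-batch gradient step on the decoder only (softmax regression)."""
    global W_dec
    z = encode(volumes)                       # frozen features
    logits = z @ W_dec
    p = np.exp(logits - logits.max(1, keepdims=True))
    p /= p.sum(1, keepdims=True)              # softmax probabilities
    onehot = np.eye(3)[labels]
    grad = z.T @ (p - onehot) / len(volumes)  # gradient w.r.t. W_dec only
    W_dec -= lr * grad                        # W_enc is never touched

# Toy training data: four random volumes with arbitrary labels.
volumes = rng.standard_normal((4, 8, 8, 8))
labels = np.array([0, 1, 2, 0])
for _ in range(300):
    train_decoder_step(volumes, labels)

preds = (encode(volumes) @ W_dec).argmax(1)
print(preds)  # decoder-only training fits the toy labels
```

The point of the design is economy: one expensive pretrained encoder is shared across tasks, and each new task adds only a small decoder's worth of trainable parameters.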

Topics

Journal Article
