Generalist foundation models from a multimodal dataset for 3D computed tomography.

February 12, 2026

Authors

Hamamci IE, Er S, Wang C, Almas F, Simsek AG, Esirgun SN, Dogan I, Durugol OF, Hou B, Shit S, Dai W, Xu M, Reynaud H, Dasdelen MF, Wittmann B, Amiranashvili T, Simsar E, Simsar M, Erdemir EB, Alanbay A, Sekuboyina A, Lafci B, Kaplan A, Lu Z, Polacin M, Kainz B, Bluethgen C, Batmanghelich K, Ozdemir MK, Menze B

Affiliations (12)

  • Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland. [email protected].
  • ETH AI Center, ETH Zurich, Zurich, Switzerland. [email protected].
  • International School of Medicine, Istanbul Medipol University, Istanbul, Turkey. [email protected].
  • Department of Quantitative Biomedicine, University of Zurich, Zurich, Switzerland.
  • International School of Medicine, Istanbul Medipol University, Istanbul, Turkey.
  • Department of Electrical and Computer Engineering, Boston University, Boston, MA, USA.
  • Division of Intramural Research, National Institutes of Health, Bethesda, MD, USA.
  • ETH AI Center, ETH Zurich, Zurich, Switzerland.
  • Department of Computing, Imperial College London, London, UK.
  • Department of Computer Science, ETH Zurich, Zurich, Switzerland.
  • Institute for Diagnostic and Interventional Radiology, University Hospital Zurich, Zurich, Switzerland.
  • Department Artificial Intelligence in Biomedical Engineering, FAU Erlangen-Nürnberg, Erlangen, Germany.

Abstract

Advancements in medical imaging AI, particularly in 3D imaging, have been limited by the scarcity of comprehensive datasets. We introduce CT-RATE, a public dataset that pairs 3D medical images with corresponding textual reports. CT-RATE comprises 25,692 non-contrast 3D chest CT scans from 21,304 unique patients, each accompanied by its corresponding radiology report. Leveraging CT-RATE, we develop CT-CLIP, a CT-focused contrastive language-image pretraining framework designed for broad applications without the need for task-specific training. We demonstrate that CT-CLIP can be used for multi-abnormality detection and case retrieval, where it outperforms state-of-the-art fully supervised models across all key metrics. By combining CT-CLIP's vision encoder with a pretrained large language model, we create CT-CHAT, a vision-language foundational chat model for 3D chest CT volumes. Fine-tuned on over 2.7 million question-answer pairs derived from the CT-RATE dataset, CT-CHAT underscores the necessity for specialized methods in 3D medical imaging. Collectively, the open-source release of CT-RATE, CT-CLIP, and CT-CHAT not only addresses critical challenges in 3D medical imaging but also lays the groundwork for future innovations in medical AI and improved patient care.
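The contrastive language-image pretraining objective underlying frameworks like CT-CLIP can be illustrated with a short sketch. The snippet below is not the authors' implementation; it is a generic CLIP-style symmetric InfoNCE loss over a batch of paired (volume, report) embeddings, with the function name, temperature value, and embedding shapes chosen purely for illustration:

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    image_emb, text_emb: arrays of shape (batch, dim), where row i of each
    array comes from the same scan-report pair.
    """
    # L2-normalize so the dot product is cosine similarity
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity logits, scaled by the temperature
    logits = image_emb @ text_emb.T / temperature
    n = logits.shape[0]
    labels = np.arange(n)  # matched pairs sit on the diagonal

    def cross_entropy(l):
        # Numerically stable log-softmax cross-entropy against the diagonal
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the image-to-text and text-to-image directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Training with this objective pulls each scan embedding toward its own report and pushes it away from the other reports in the batch, which is what enables zero-shot abnormality detection and case retrieval without task-specific heads.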

Topics

Journal Article
