Generative AI enables medical image segmentation in ultra low-data regimes.

Authors

Zhang L, Jindal B, Alaa A, Weinreb R, Wilson D, Segal E, Zou J, Xie P

Affiliations (11)

  • Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA.
  • Bakar Computational Health Sciences Institute, University of California San Francisco, San Francisco, CA, USA.
  • Department of Electrical Engineering and Computer Sciences, University of California Berkeley, Berkeley, CA, USA.
  • Hamilton Glaucoma Center, Shiley Eye Institute, Viterbi Family Department of Ophthalmology, University of California San Diego, La Jolla, CA, USA.
  • Division of Pulmonary, Allergy and Critical Care Medicine, Department of Medicine, University of Pittsburgh, Pittsburgh, PA, USA.
  • Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel.
  • Department of Molecular Cell Biology, Weizmann Institute of Science, Rehovot, Israel.
  • Department of Biomedical Data Science, Stanford University School of Medicine, Stanford, CA, USA.
  • Department of Computer Science, Stanford University, Stanford, CA, USA.
  • Department of Electrical and Computer Engineering, University of California San Diego, La Jolla, CA, USA. [email protected].
  • Department of Medicine, University of California San Diego, La Jolla, CA, USA. [email protected].

Abstract

Semantic segmentation of medical images is pivotal in applications like disease diagnosis and treatment planning. While deep learning automates this task effectively, it struggles in ultra low-data regimes owing to the scarcity of annotated segmentation masks. To address this, we propose a generative deep learning framework that produces high-quality image-mask pairs as auxiliary training data. Unlike traditional generative models that separate data generation from model training, ours uses multi-level optimization for end-to-end data generation. This allows segmentation performance to guide the generation process, producing data tailored to improve segmentation outcomes. Our method demonstrates strong generalization across 11 medical image segmentation tasks and 19 datasets, covering various diseases, organs, and imaging modalities. It improves performance by 10-20% (absolute) in both same-domain and out-of-domain settings and requires 8-20 times less training data than existing approaches. This greatly enhances the feasibility and cost-effectiveness of deep learning in data-limited medical imaging scenarios.
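
The abstract states that segmentation performance guides data generation through multi-level optimization, but gives no implementation details. The sketch below illustrates one common way such a coupling can be realized: a one-step unrolled bi-level update in PyTorch, where a generator proposes synthetic image-mask pairs, the segmenter takes a differentiable inner training step on real plus synthetic data, and the segmenter's validation loss is backpropagated into the generator. This is a minimal illustration under assumed simplifications, not the authors' implementation; all architectures, hyperparameters, and data here are hypothetical placeholders.

```python
# Hypothetical sketch of a one-step unrolled bi-level update, NOT the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

class Generator(nn.Module):
    """Maps a noise vector to a toy synthetic image-mask pair (1x32x32 each)."""
    def __init__(self, z_dim=16):
        super().__init__()
        self.fc = nn.Linear(z_dim, 2 * 32 * 32)
    def forward(self, z):
        out = self.fc(z).view(-1, 2, 32, 32)
        return torch.tanh(out[:, :1]), torch.sigmoid(out[:, 1:])  # image, soft mask

class Segmenter(nn.Module):
    """Tiny fully convolutional segmentation network (placeholder for a real U-Net)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

def seg_loss(logits, target):
    return F.binary_cross_entropy_with_logits(logits, target)

gen, seg = Generator(), Segmenter()
gen_opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
seg_opt = torch.optim.Adam(seg.parameters(), lr=1e-3)
inner_lr = 0.1

# Toy stand-ins for a tiny labeled training set and a held-out validation set.
x_tr, y_tr = torch.randn(4, 1, 32, 32), torch.rand(4, 1, 32, 32).round()
x_val, y_val = torch.randn(4, 1, 32, 32), torch.rand(4, 1, 32, 32).round()

for step in range(100):
    # Outer level: one unrolled inner step on real + synthetic data, then
    # backpropagate the validation loss through that step into the generator.
    x_syn, y_syn = gen(torch.randn(8, 16))
    params = {k: v.detach().clone().requires_grad_(True)
              for k, v in seg.named_parameters()}
    inner = (seg_loss(functional_call(seg, params, (x_tr,)), y_tr)
             + seg_loss(functional_call(seg, params, (x_syn,)), y_syn))
    grads = torch.autograd.grad(inner, list(params.values()), create_graph=True)
    updated = {k: p - inner_lr * g
               for (k, p), g in zip(params.items(), grads)}
    val = seg_loss(functional_call(seg, updated, (x_val,)), y_val)
    gen_opt.zero_grad(); val.backward(); gen_opt.step()

    # Inner level: ordinary segmenter training on real + (detached) synthetic pairs.
    x_syn, y_syn = gen(torch.randn(8, 16))
    loss = (seg_loss(seg(x_tr), y_tr)
            + seg_loss(seg(x_syn.detach()), y_syn.detach()))
    seg_opt.zero_grad(); loss.backward(); seg_opt.step()
```

The one-step unrolling above is only a first-order approximation of the multi-level scheme described in the abstract; the point is the structure: synthetic data influence the segmenter's update, and the segmenter's held-out performance supplies the training signal for the generator.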

Topics

Deep Learning; Image Processing, Computer-Assisted; Diagnostic Imaging; Journal Article
