
ProtoSAM for automated one-shot medical image segmentation using foundational models.

November 24, 2025

Authors

Ayzenberg L, Giryes R, Greenspan H

Affiliations (3)

  • Department of Engineering, Tel Aviv University, Tel Aviv, Israel. [email protected].
  • Department of Engineering, Tel Aviv University, Tel Aviv, Israel.
  • Icahn School of Medicine at Mount Sinai, New York, NY, USA.

Abstract

This work presents an advance in one-shot medical image segmentation, in which a single image-label sample from a new site is used to fine-tune the solution - particularly valuable when labeled data is scarce or rapid adaptation to new classes and sites is required. We introduce ProtoSAM, a novel, fully automated framework for one-shot medical image segmentation that combines prototypical networks, known for few-shot segmentation, with the Segment Anything Model (SAM), a natural-image foundation model for segmentation. The proposed method creates an initial coarse segmentation mask using the ALPNet prototypical network, augmented with a DINOv2 encoder. From this initial mask, prompts such as points and bounding boxes are extracted and then input into SAM. We present extensive validation on multiple datasets including CT, MRI, and endoscopy images, demonstrating state-of-the-art results in many scenarios. Our results show that an untrained ProtoSAM can match or exceed the performance of existing one-shot trained methods, with further improvements possible through self-supervised finetuning of the encoder. Our code is available at: https://github.com/levayz/ProtoSAM/ .
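The abstract's core pipeline step - reducing the coarse prototypical-network mask to SAM-style prompts - can be sketched as follows. This is an illustrative example, not the authors' implementation: the function name `mask_to_prompts` and the choice of a single centroid point prompt are assumptions; the paper's exact prompt-extraction rules may differ.

```python
import numpy as np

def mask_to_prompts(mask: np.ndarray):
    """Reduce a coarse binary mask to SAM-style prompts (sketch).

    Returns a bounding box in SAM's (x_min, y_min, x_max, y_max)
    convention plus a single positive point prompt, or None if the
    mask is empty. The centroid point prompt is an assumption for
    illustration, not necessarily the paper's rule.
    """
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None  # empty coarse mask: nothing to prompt with
    # Tight bounding box around the foreground pixels.
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])
    # One foreground point at the mask's center of mass.
    point = np.array([[xs.mean(), ys.mean()]])
    point_labels = np.array([1])  # 1 = foreground in SAM's convention
    return box, point, point_labels

# Toy example: a 3x4 foreground block inside a 10x10 mask.
mask = np.zeros((10, 10), dtype=np.uint8)
mask[2:5, 3:7] = 1
box, point, labels = mask_to_prompts(mask)
```

In a full ProtoSAM-style pipeline, `box`, `point`, and `labels` would then be passed to a SAM predictor (e.g. its `predict` call) to refine the coarse mask.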

Topics

Image Processing, Computer-Assisted; Journal Article
