
MedicoSAM: Robust Improvement of SAM for Medical Imaging.

December 17, 2025

Authors

Archit A, Freckmann L, Pape C

Abstract

Medical image segmentation is an important analysis task in clinical practice and research. Deep learning has massively advanced the field, but current approaches are mostly based on models trained for a specific task. Training such models, or adapting them to a new condition, is costly due to the need for labeled data. The emergence of vision foundation models, especially the Segment Anything Model (SAM), offers a path to universal segmentation for medical images, overcoming these issues. Here, we study how to improve SAM for medical images by comparing different finetuning strategies on a large and diverse dataset. We evaluate the finetuned models on a wide range of interactive and automatic semantic segmentation tasks. We find that performance clearly improves given the correct choice of finetuning strategy. This improvement is especially pronounced for interactive segmentation. Semantic segmentation also benefits, but the advantage over traditional segmentation approaches is inconsistent. Our best model, MedicoSAM, is publicly available. We show that it is compatible with existing tools for data annotation and believe it will be of great practical value.
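Since MedicoSAM retains the standard SAM architecture, the released checkpoint should load with the original segment-anything package, which is what makes it compatible with existing annotation tools. Below is a minimal sketch of the point-prompted interactive segmentation the abstract evaluates; the checkpoint filename (medicosam_vit_b.pth) and the ViT-B backbone key are illustrative assumptions, not details stated in the abstract.

import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load the finetuned weights into a standard SAM backbone.
# NOTE: the checkpoint path and the "vit_b" key are assumptions, not confirmed here.
sam = sam_model_registry["vit_b"](checkpoint="medicosam_vit_b.pth")
predictor = SamPredictor(sam)

# Any HxWx3 uint8 RGB image works; a random array stands in for a medical image.
image = np.random.randint(0, 255, (512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# Interactive segmentation from a single positive point prompt.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[256, 256]]),  # (x, y) pixel coordinates
    point_labels=np.array([1]),           # 1 = foreground, 0 = background
    multimask_output=True,
)
best_mask = masks[np.argmax(scores)]      # keep the highest-scoring mask proposal

The same prompt-predict loop underlies SAM-based annotation tools, so a finetuned checkpoint that keeps the original interface can be swapped in without changing the tooling.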

Topics

Journal Article
