Onco-Seg: Adapting Promptable Concept Segmentation for Multi-Modal Medical Imaging

January 15, 2026 · medRxiv preprint

Authors

Makani, A., Agrawal, A., Agrawal, A.

Affiliations (1)

  • Ashoka University

Abstract

Medical image segmentation remains a critical bottleneck in clinical workflows, from diagnostic radiology to radiation oncology treatment planning. We present Onco-Seg, a medical imaging adaptation of Meta's Segment Anything Model 3 (SAM3) that leverages promptable concept segmentation for automated tumor and organ delineation across multiple imaging modalities. Unlike previous SAM adaptations limited to single modalities, Onco-Seg introduces a unified framework supporting CT, MRI, ultrasound, dermoscopy, and endoscopy through modality-specific preprocessing and parameter-efficient fine-tuning with Low-Rank Adaptation (LoRA). We train on 35 datasets comprising over 98,000 cases across 8 imaging modalities using sequential checkpoint chaining on a 4-GPU distributed training infrastructure. We evaluate Onco-Seg on 12 benchmark datasets spanning breast, liver, prostate, lung, skin, and gastrointestinal pathologies, achieving strong performance on breast ultrasound (Dice: 0.752 ± 0.24), polyp segmentation (Dice: 0.714 ± 0.32), and liver CT (Dice: 0.641 ± 0.12). We further propose two clinical deployment patterns: an interactive "sidecar" for diagnostic radiology and a "silent assistant" for automated radiation oncology contouring. Our results demonstrate that foundation model adaptation can enable robust multi-modal medical segmentation while maintaining clinical workflow integration.
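The abstract names LoRA as the parameter-efficient fine-tuning method but this page carries no code, so the sketch below is a minimal illustration of the general technique: a frozen linear projection wrapped with a trainable low-rank update, as is typically done for the attention projections of a large vision backbone. The `LoRALinear` class, the rank and alpha values, and the layer dimensions are illustrative assumptions, not Onco-Seg's actual configuration.

```python
# Minimal LoRA sketch (not the authors' code): a frozen base projection
# plus a trainable low-rank update, y = W x + (alpha / r) * B A x.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank adapter."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # base weights stay frozen
        # A is small random init, B is zero init, so training starts
        # from the unmodified base model (standard LoRA practice).
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path + scaled low-rank path; only A and B receive gradients.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)


# Usage: adapt one (stand-in) attention projection of an encoder layer.
proj = nn.Linear(768, 768)
adapted = LoRALinear(proj, rank=8, alpha=16.0)
out = adapted(torch.randn(2, 196, 768))
print(out.shape)  # torch.Size([2, 196, 768])
```

Because only the rank-r factors are trained, the trainable parameter count per layer drops from in_features × out_features to r × (in_features + out_features), which is what makes fine-tuning a SAM-scale backbone per modality tractable on a small GPU cluster.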

Topics

oncology
