Unsupervised Cross-Modality MR Image Segmentation Via Prompt-Driven Foundation Model.

November 3, 2025

Authors

Ma W, He K, Zhang J, Wang H, Zhang L, Zhang K, Yang Q, Wong LY, Shen W, Zhang H, Dou Q

Abstract

Obtaining pixel-level expert annotations is expensive and labor-intensive in medical imaging, especially for multi-modality data such as MR. Most conventional cross-modality segmentation methods rely on unsupervised domain adaptation to achieve efficient cross-domain segmentation. However, these methods are often hindered by discrepancies between the source and target domains. In this paper, we propose a new scheme for cross-modality segmentation based on foundation models, which exploits spatial consistency across modalities and is therefore unaffected by source-target domain discrepancies. This scheme allows annotated data from one imaging modality to train a network capable of accurate segmentation on other target modalities, without target-domain labels or registration. Specifically, we propose a SAM-based model that uses segmentation results from one imaging modality as pseudo labels and as prompts to guide training and testing in the target imaging modality. Moreover, we introduce consistency-based prompt tuning and hybrid representation learning to address the misalignment (unregistered data) and noisy-label problems that can arise in cross-modality segmentation. We conducted extensive validation on two internal datasets and one public dataset, covering liver lesion segmentation and liver segmentation. Our method demonstrates significant improvement over current state-of-the-art approaches.
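
As a rough illustration of the prompt-driven idea, the sketch below converts a pseudo-label mask (e.g., one predicted by a model trained on the source modality) into box and point prompts for an off-the-shelf SAM predictor applied to the target-modality slice. This is a minimal sketch assuming the public segment-anything package and a local ViT-B checkpoint; the mask-to-prompt helper and all variable names are illustrative, and the paper's consistency-based prompt tuning and hybrid representation learning are not shown.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor


def mask_to_prompts(pseudo_mask: np.ndarray):
    """Derive a box and a point prompt from a binary pseudo-label mask.

    Uses the tight bounding box plus the foreground centroid; the centroid
    is a simplification and may fall outside highly non-convex masks.
    """
    ys, xs = np.nonzero(pseudo_mask)
    if len(xs) == 0:
        return None, None, None
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()])  # XYXY format
    point = np.array([[xs.mean(), ys.mean()]])                # foreground centroid
    label = np.array([1])                                     # 1 = foreground point
    return box, point, label


# Stand-in data: a target-modality MR slice (H x W x 3 uint8) and a
# pseudo-label mask predicted on the source modality.
target_slice = np.zeros((256, 256, 3), dtype=np.uint8)
pseudo_mask = np.zeros((256, 256), dtype=np.uint8)
pseudo_mask[100:150, 80:140] = 1

# Load a SAM checkpoint (the file path is a placeholder).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# Prompt SAM on the target-modality slice with prompts derived from
# the source-modality pseudo label.
predictor.set_image(target_slice)
box, point, label = mask_to_prompts(pseudo_mask)
masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    box=box,
    multimask_output=False,
)
target_mask = masks[0]  # refined segmentation on the target modality
```

Per the abstract, such pseudo labels and prompts guide both training and testing in the target modality; the sketch above shows only the inference side.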

Topics

Journal Article
