
RetinexDA: Progressive Disentanglement Domain Adaptation for Unsupervised Cross-Modality Medical Image Segmentation.

April 28, 2026

Authors

Wu Y, Yin M, Kong Z, Chen J, Wu J, Gao H, Xu H

Abstract

Deep neural networks have achieved strong performance in medical image segmentation when the training and testing data share similar appearance characteristics. However, this assumption is rarely satisfied in practical clinical scenarios, where imaging protocols, scanner vendors, and modality physics differ substantially, resulting in severe performance degradation when the model is deployed to new environments. To address this challenge, we propose RetinexDA, a novel unsupervised domain adaptation framework that explicitly decomposes a medical image into domain-invariant structural and domain-specific appearance representations. This Retinex-inspired formulation preserves essential anatomical details while mitigating modality-dependent variations. Furthermore, we introduce Disentangled Knowledge Distillation (DKD) to ensure mutual semantic alignment between the structure-appearance decomposition in pixel space and the encoded features in latent space, strengthening fine-grained segmentation capability. In addition, a Bézier-curve domain bridging strategy is developed to generate smoothly transitioned intermediate samples across domains, improving adaptation robustness under large modality discrepancies. Extensive experiments on abdominal CT and cardiac MRI segmentation tasks demonstrate that RetinexDA surpasses state-of-the-art unsupervised domain adaptation approaches, showing strong potential for scalable and reliable clinical deployment.
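The abstract does not give the exact formulation of the Bézier-curve domain bridging, but the general idea of remapping normalized image intensities through a monotone cubic Bézier curve, then blending the remapped "target-like" image with the source image, can be sketched as follows. All function names, control-point choices, and the linear blending scheme below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def bezier_intensity_map(image, control_points, n_samples=1000):
    """Remap normalized intensities in [0, 1] through a cubic Bezier curve.

    control_points: four (x, y) pairs. Keeping the endpoints at (0, 0) and
    (1, 1) preserves the value range; the two middle points bend the mapping,
    changing image appearance while leaving structure intact.
    (Illustrative sketch; not the paper's exact formulation.)
    """
    p = np.asarray(control_points, dtype=float)  # shape (4, 2)
    t = np.linspace(0.0, 1.0, n_samples)
    # Cubic Bezier via the Bernstein basis.
    b = ((1 - t) ** 3)[:, None] * p[0] \
        + (3 * (1 - t) ** 2 * t)[:, None] * p[1] \
        + (3 * (1 - t) * t ** 2)[:, None] * p[2] \
        + (t ** 3)[:, None] * p[3]
    xs, ys = b[:, 0], b[:, 1]
    # np.interp needs monotonically increasing sample points.
    order = np.argsort(xs)
    return np.interp(image, xs[order], ys[order])

def bridged_sample(src, target_style_map, alpha):
    """Blend source appearance with a remapped 'target-like' appearance.

    Sweeping alpha from 0 to 1 yields the smoothly transitioned
    intermediate samples described in the abstract.
    """
    return (1.0 - alpha) * src + alpha * target_style_map(src)

# Example: an inverted-S curve as a stand-in target appearance.
rng = np.random.default_rng(0)
img = rng.random((4, 4))
style = lambda x: bezier_intensity_map(
    x, [(0, 0), (0.2, 0.8), (0.8, 0.2), (1, 1)])
intermediate = bridged_sample(img, style, 0.5)
```

With control points on the diagonal (e.g. `(1/3, 1/3)` and `(2/3, 2/3)`) the map reduces to the identity, so the curve's deviation from the diagonal directly controls how far the sample moves from the source appearance.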

Topics

Journal Article
