
ResTRANS3D hybrid framework for data-efficient 3D medical image segmentation.

March 12, 2026

Authors

Sun Y, Chen W

Affiliations (1)

  • Adelaide University, South Australia, SA 5005, Australia.

Abstract

Deep learning has become an important tool for 3D medical image segmentation, where learning effective representations from limited labeled data remains essential for practical deployment. Here, we present ResTRANS3D, a data-efficient self-supervised hybrid framework that combines a 3D-ResNet encoder with a multi-scale Transformer through a residual interaction mechanism to jointly model local spatial structures and long-range contextual dependencies. A dynamic position learning module generates adaptive positional representations conditioned on multi-scale features, while selective self-attention reduces the computational cost of global attention. The model is pretrained using a dual self-supervised strategy that integrates contrastive learning and image reconstruction. Experiments on multiple public 3D medical image benchmarks show that ResTRANS3D supports effective downstream segmentation, particularly when labeled data are limited. These results highlight the potential of hybrid representation learning to improve data-efficient 3D medical image analysis.
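The dual self-supervised pretraining objective described above combines a contrastive term with an image-reconstruction term. The sketch below illustrates one plausible form of that combined loss, assuming an InfoNCE-style contrastive term and a mean-squared-error reconstruction term with an equal weighting `alpha=0.5`; the function names, the NumPy formulation, and the specific weighting are illustrative assumptions, not details confirmed by the paper.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive InfoNCE loss over two batches of embeddings.

    z1, z2: (N, D) arrays; row i of z1 and row i of z2 are assumed to
    come from the same underlying volume (the positive pair), while all
    other rows serve as negatives.
    """
    # L2-normalise so that the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Cross-entropy with the diagonal (matched pairs) as the positive class.
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

def reconstruction_loss(pred, target):
    """Mean-squared error between reconstructed and original voxels."""
    return np.mean((pred - target) ** 2)

def dual_ssl_loss(z1, z2, recon, original, alpha=0.5):
    """Weighted sum of the contrastive and reconstruction objectives.

    alpha is a hypothetical trade-off weight; the paper does not
    specify how the two terms are balanced.
    """
    return alpha * info_nce_loss(z1, z2) + (1 - alpha) * reconstruction_loss(recon, original)
```

When the two views of each volume embed identically, the contrastive term is near zero; a perfect reconstruction drives the second term to zero, so the combined loss rewards both invariant representations and voxel-level fidelity.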

Topics

Journal Article
