HResFormer: Hybrid Residual Transformer for Volumetric Medical Image Segmentation
Abstract
Vision Transformers show great superiority in medical image segmentation owing to their ability to learn long-range dependencies. For medical image segmentation from 3-D data, such as computed tomography (CT), existing methods can be broadly classified into 2-D-based and 3-D-based methods. A key limitation of 2-D-based methods is that inter-slice information is ignored, while 3-D-based methods suffer from high computation cost and memory consumption, which limits their feature representation of fine-grained intra-slice information. During clinical examination, radiologists primarily read the axial plane and then routinely review both the axial and coronal planes to form a 3-D understanding of the anatomy. Motivated by this practice, our key insight is to design a hybrid model that first learns fine-grained intra-slice information and then builds a 3-D understanding of the anatomy by incorporating inter-slice information. We present a novel Hybrid Residual TransFormer (HResFormer) for 3-D medical image segmentation. Building on standard 2-D and 3-D Transformer backbones, HResFormer introduces two key designs: 1) a Hybrid Local-Global fusion Module (HLGM) that effectively and adaptively fuses intra-slice information from the 2-D Transformer with inter-slice information from 3-D volumes for the 3-D Transformer, yielding both local fine-grained and global long-range representations and 2) residual learning of the hybrid model, which effectively leverages intra-slice and inter-slice information for a better 3-D understanding of the anatomy. Experiments show that HResFormer outperforms prior art on widely used medical image segmentation benchmarks. This article sheds light on an important but neglected way to design Transformers for 3-D medical image segmentation.
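To make the two key designs concrete, below is a minimal sketch of the hybrid residual idea in PyTorch. It is an illustration under our own assumptions, not the authors' released implementation: the module names (HybridLocalGlobalFusion, HybridResidualSegmenter), the voxel-wise gating mechanism, the plain convolutional encoders standing in for the 2-D and 3-D Transformer backbones, and all shapes and hyperparameters are hypothetical.

```python
# Hedged sketch of the hybrid 2-D/3-D fusion with residual learning described
# in the abstract. All names, layers, and shapes below are assumptions for
# illustration only; they do not reproduce the authors' HResFormer.
import torch
import torch.nn as nn


class HybridLocalGlobalFusion(nn.Module):
    """Hypothetical stand-in for the HLGM: adaptively fuses intra-slice (2-D)
    features with inter-slice (3-D) features via a learned voxel-wise gate."""

    def __init__(self, channels: int):
        super().__init__()
        # Local fine-grained refinement path (assumed: a 3-D convolution).
        self.local = nn.Conv3d(channels, channels, kernel_size=3, padding=1)
        # Gate weighing the 2-D vs. 3-D contribution per voxel (assumed).
        self.gate = nn.Sequential(
            nn.Conv3d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feat2d: torch.Tensor, feat3d: torch.Tensor) -> torch.Tensor:
        # feat2d: slice-wise features stacked along depth, (B, C, D, H, W)
        # feat3d: volumetric features with the same shape
        g = self.gate(torch.cat([feat2d, feat3d], dim=1))
        fused = g * feat2d + (1.0 - g) * feat3d
        return fused + self.local(fused)  # local refinement with a skip


class HybridResidualSegmenter(nn.Module):
    """Toy pipeline: a 2-D stage extracts per-slice features, a 3-D stage adds
    volumetric context, and a residual connection lets the fused 3-D branch
    learn only a correction on top of the 2-D features (the 'residual
    learning of the hybrid model' idea)."""

    def __init__(self, in_ch: int, channels: int, num_classes: int):
        super().__init__()
        # Convolutions stand in for the 2-D and 3-D Transformer backbones.
        self.encoder2d = nn.Conv2d(in_ch, channels, kernel_size=3, padding=1)
        self.encoder3d = nn.Conv3d(in_ch, channels, kernel_size=3, padding=1)
        self.fusion = HybridLocalGlobalFusion(channels)
        self.head = nn.Conv3d(channels, num_classes, kernel_size=1)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (B, C_in, D, H, W)
        b, c, d, h, w = volume.shape
        # 2-D stage: run each axial slice independently, then restack.
        slices = volume.permute(0, 2, 1, 3, 4).reshape(b * d, c, h, w)
        feat2d = (
            self.encoder2d(slices)
            .reshape(b, d, -1, h, w)
            .permute(0, 2, 1, 3, 4)
        )
        # 3-D stage over the full volume for inter-slice context.
        feat3d = self.encoder3d(volume)
        # Residual: the fused branch refines, rather than replaces, feat2d.
        fused = feat2d + self.fusion(feat2d, feat3d)
        return self.head(fused)


# Usage example (shapes only):
if __name__ == "__main__":
    model = HybridResidualSegmenter(in_ch=1, channels=16, num_classes=4)
    ct = torch.randn(1, 1, 8, 64, 64)  # (B, C, D, H, W) toy CT volume
    print(model(ct).shape)  # torch.Size([1, 4, 8, 64, 64])
```

The residual connection around the fusion step means the volumetric branch only needs to learn an inter-slice correction to the already fine-grained slice-wise features, which mirrors the motivation of reading the axial plane first and then adding 3-D context.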