DCM-Net: dual-encoder CNN-Mamba network with cross-branch fusion for robust medical image segmentation.

Authors

Atabansi CC, Wang S, Li H, Nie J, Xiang L, Zhang C, Liu H, Zhou X, Li D

Affiliations (6)

  • School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China. [email protected].
  • Department of Hepatobiliary Pancreatic Tumor Center, Chongqing Key Laboratory of Translational Research for Cancer Metastasis and Individualized Treatment, Chongqing University Cancer Hospital, Chongqing, 400030, China.
  • Department of Hepatobiliary Pancreatic Tumor Center, Chongqing Key Laboratory of Translational Research for Cancer Metastasis and Individualized Treatment, Chongqing University Cancer Hospital, Chongqing, 400030, China. [email protected].
  • School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China.
  • School of Microelectronics and Communication Engineering, Chongqing University, Chongqing, 400044, China. [email protected].
  • Department of Hepatobiliary Pancreatic Tumor Center, Chongqing Key Laboratory of Translational Research for Cancer Metastasis and Individualized Treatment, Chongqing University Cancer Hospital, Chongqing, 400030, China. [email protected].

Abstract

Medical image segmentation is a critical task for the early detection and diagnosis of various conditions, such as skin cancer, polyps, thyroid nodules, and pancreatic tumors. Deep learning architectures have recently achieved significant success in this field, but they face a critical trade-off between local feature extraction and global context modeling. To address this limitation, we present DCM-Net, a dual-encoder architecture that integrates pretrained CNN layers with Visual State Space (VSS) blocks through a Cross-Branch Feature Fusion Module (CBFFM). A Decoder Feature Enhancement Module (DFEM) combines depth-wise separable convolutions with MLP-based semantic rectification to enhance the decoded features and improve segmentation performance. Additionally, we introduce a new 2D pancreas and pancreatic-tumor dataset (CCH-PCT-CT), collected from Chongqing University Cancer Hospital and comprising 3,547 annotated CT slices, which we use to validate the proposed model. DCM-Net generates robust features for tumor and organ segmentation and outperforms all baseline models across every dataset investigated in this study, achieving higher Dice Similarity Coefficient (DSC) and mean Intersection over Union (mIoU) scores. Its robustness confirms strong potential for clinical use.
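The abstract does not give the DFEM's implementation details, but the depth-wise separable convolution it builds on is a standard operation: a per-channel spatial convolution followed by a 1x1 point-wise convolution that mixes channels, trading a small accuracy cost for far fewer parameters than a full convolution. A minimal NumPy sketch (the function name and shapes here are illustrative, not from the paper):

```python
import numpy as np

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depth-wise separable convolution (valid padding, stride 1).
    x:          (C, H, W) input feature map
    dw_kernels: (C, k, k) one spatial kernel per input channel
    pw_weights: (C_out, C) point-wise (1x1) channel-mixing weights
    Returns:    (C_out, H-k+1, W-k+1)
    """
    C, H, W = x.shape
    _, k, _ = dw_kernels.shape
    Ho, Wo = H - k + 1, W - k + 1

    # Depth-wise step: each channel is filtered by its own spatial kernel,
    # with no mixing across channels.
    dw_out = np.zeros((C, Ho, Wo))
    for c in range(C):
        for i in range(Ho):
            for j in range(Wo):
                dw_out[c, i, j] = np.sum(x[c, i:i+k, j:j+k] * dw_kernels[c])

    # Point-wise step: a 1x1 convolution mixes information across channels.
    return np.tensordot(pw_weights, dw_out, axes=([1], [0]))
```

For a k x k kernel, C input channels, and C_out outputs, this costs C*k*k + C*C_out weights versus C*C_out*k*k for a standard convolution, which is why the operation is popular in efficiency-minded decoder modules.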

Topics

Journal Article
