
DPGNet: A Boundary-Aware Medical Image Segmentation Framework Via Uncertainty Perception

Authors

Wang H, Qi Y, Liu W, Guo K, Lv W, Liang Z

Abstract

Addressing the critical challenge of precise boundary delineation in medical image segmentation, we introduce DPGNet, an adaptive deep learning model engineered to emulate expert perception of intricate anatomical edges. Our key innovations drive its superior performance and clinical utility, encompassing: 1) a three-stage progressive refinement strategy that establishes global context, performs hierarchical feature enhancement, and precisely delineates local boundaries; 2) a novel Edge Difference Attention (EDA) module that implicitly learns and quantifies boundary uncertainties without requiring explicit ground truth supervision; and 3) a lightweight, transformer-based architecture ensuring an exceptional balance between performance and computational efficiency. Extensive experiments across diverse and challenging medical image datasets demonstrate DPGNet's consistent superiority over state-of-the-art methods, achieved with significantly lower computational overhead (25.51 M parameters). Its boundary refinement is rigorously validated through comprehensive metrics (Boundary-IoU, HD95) and confirmed by clinical expert evaluations. Crucially, DPGNet generates an explicit uncertainty boundary map, providing clinicians with actionable insights to identify ambiguous regions, thereby enhancing diagnostic precision and facilitating more accurate clinical segmentation outcomes. Our code is available at: https://github.com/fangnengwuyou/DPGNet.
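The abstract does not specify how the EDA module is implemented; as a rough intuition only, one could imagine boundary uncertainty being estimated from the difference between a feature map and a locally smoothed copy of itself, with that difference reweighting the features. The following is a minimal, hypothetical NumPy sketch of that idea (the function name, box-filter smoothing, and sigmoid gating are all assumptions, not the paper's method):

```python
import numpy as np

def edge_difference_attention(feat, k=3):
    """Hypothetical sketch: gate a 2-D feature map by an edge-difference signal.

    feat : (H, W) array, one feature channel.
    k    : side length of the box filter used for local smoothing (assumed).
    Returns (gated_feat, uncertainty_map).
    """
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")
    H, W = feat.shape
    local_mean = np.empty_like(feat, dtype=float)
    for i in range(H):
        for j in range(W):
            # local average over a k x k window (box filter)
            local_mean[i, j] = padded[i:i + k, j:j + k].mean()
    # edge difference: large where the feature changes sharply, i.e. near boundaries
    edge = np.abs(feat - local_mean)
    # sigmoid gate; since edge >= 0 the gate lies in [0.5, 1), emphasizing edges
    attn = 1.0 / (1.0 + np.exp(-edge))
    return feat * attn, edge
```

In this toy form the returned `edge` map plays the role of an uncertainty map: it is high only where neighboring feature values disagree, which is the kind of boundary-ambiguity signal the abstract says DPGNet surfaces to clinicians.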

Topics

Journal Article
