Uncertainty- and hardness-weighted loss functions for medical image segmentation.

January 1, 2026 · PubMed

Authors

Zheng Y, Wu Y, Chen J, Yang X, Zhang H, Yi Q, Pu J, Wang L

Affiliations (7)

  • Wenzhou Third Clinical Institute Affiliated to Wenzhou Medical University, Third Affiliated Hospital of Shanghai University, Wenzhou People's Hospital, Wenzhou 32500, China.
  • School of Biomedical Engineering and Imaging Sciences, King's College London, London, UK.
  • National Engineering Research Center of Ophthalmology and Optometry, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
  • The Affiliated Ningbo Eye Hospital of Wenzhou Medical University, Ningbo 315042, China.
  • Departments of Radiology, Bioengineering, and Ophthalmology, University of Pittsburgh, Pittsburgh, PA 15213, United States.
  • National Clinical Research Center for Ocular Diseases, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.
  • Zhejiang Key Laboratory of Ophthalmic Drug Discovery and Medical Device Research, Eye Hospital, Wenzhou Medical University, Wenzhou, 325027, China.

Abstract

Accurate segmentation of medical images is essential for various image processing tasks and is now predominantly achieved using deep learning techniques. However, existing approaches often employ loss functions that fail to account for pixel-level differences in prediction uncertainty or hardness. This limitation frequently results in relatively large segmentation errors, particularly in object boundary regions. To address this limitation, we developed a novel class of uncertainty- and hardness-weighted loss functions by introducing two distinct pixel-wise weighting schemes: probability-guided uncertainty (PGU) and region-enhanced hardness (REH) weights. These weights, derived from the differences between network predictions and their corresponding ground truths, are designed to emphasize challenging pixels while reducing segmentation uncertainty. We validated the proposed loss functions by integrating them with two classical neural networks, the Swin Transformer-based U-shaped network (Swin-Unet) and the V-shaped network (V-Net), to segment two- and three-dimensional target objects across four image datasets: the Retinal Fundus Glaucoma Challenge (REFUGE), Retinal Vascular Tree Analysis (RETA), optical coherence tomography (OCT), and Atria Segmentation Challenge (ASC) datasets. Extensive experiments demonstrated that the proposed loss functions outperformed classical losses, such as cross-entropy (CE) and Dice, along with their variants, highlighting the effectiveness and generalizability of the introduced weighting schemes. The source code is available at https://github.com/wmuLei/uhLoss.
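
The abstract's core idea, reweighting a standard per-pixel loss by how far each prediction deviates from its ground truth, can be illustrated with a short sketch. The snippet below is a hedged approximation only: the function name error_weighted_bce and the absolute-error weighting rule are illustrative assumptions, not the paper's actual PGU or REH definitions, which should be taken from the linked repository.

    # Minimal PyTorch sketch of a pixel-wise error-weighted loss.
    # The weighting rule (absolute prediction error) is a hypothetical
    # stand-in for the paper's PGU/REH weights, which the abstract does
    # not fully specify; see https://github.com/wmuLei/uhLoss for the
    # authors' implementation.
    import torch
    import torch.nn.functional as F

    def error_weighted_bce(logits, targets, eps=1e-6):
        probs = torch.sigmoid(logits)                     # per-pixel foreground probability
        # Hypothetical weight: pixels whose predictions deviate more from
        # the ground truth get larger weights, emphasizing hard pixels.
        weights = (probs - targets).abs().detach() + eps  # detached: weights carry no gradient
        bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
        return (weights * bce).sum() / weights.sum()      # weighted mean over all pixels

    # Example usage on a batch of 2-D binary segmentation maps:
    logits = torch.randn(2, 1, 64, 64)                    # raw network outputs
    targets = torch.randint(0, 2, (2, 1, 64, 64)).float() # binary ground truth
    loss = error_weighted_bce(logits, targets)

Detaching the weights keeps the gradient flowing only through the cross-entropy term, so the weighting acts as a per-pixel importance map rather than an additional optimization target.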

Topics

Journal Article
