U-GRKAN: An Efficient and Interpretable Architecture for Medical Image Segmentation.

November 6, 2025

Authors

Wu X, Ji S, Tao J, Gu Y

Affiliations (2)

  • School of Information Engineering, Huzhou University, Huzhou, 313000, Zhejiang, China.
  • School of Information Engineering, Huzhou University, Huzhou, 313000, Zhejiang, China. [email protected].

Abstract

Segmentation accuracy and consistency directly affect treatment safety and the reliability of decision-making in tumor delineation, organ-at-risk protection, preoperative planning, and follow-up evaluation. U-shaped convolutional networks still have limitations in cross-region modeling and adaptive cross-layer fusion, while Transformer and hybrid architectures, although capable of encoding global context, are computationally expensive and offer limited interpretability. To address these issues, we propose a multi-group rational KAN (MGR-KAN) and embed it into the U-shaped framework to form U-GRKAN: the per-edge B-splines are replaced with grouped rational functions that are shared within each group and diversified across groups, reducing the parameter count by 48% compared with U-KAN and providing function-level interpretability through group-level response curves; channel attention is further used for adaptive cross-layer fusion. We evaluated U-GRKAN on four datasets (BUSI, GlaS, CVC, and COVID-19-CT-Seg), achieving IoU/F1 scores of 67.85/80.58, 88.25/93.75, 86.63/92.74, and 78.57/87.97, respectively, improvements of 2.63/2.01, 0.75/0.42, 1.58/0.86, and 2.51/1.61 over the second-best model. Overall, U-GRKAN strikes a better balance between accuracy, complexity, and interpretability, and shows good generalization potential across modalities.
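
The core idea, a grouped rational activation replacing the per-edge B-splines of a KAN layer, can be illustrated with a short sketch. The following is a minimal PyTorch illustration, assuming a Padé-style rational form R(x) = P(x) / (1 + |Q(x)|) with coefficients shared within each channel group; the `GroupedRationalActivation` class, polynomial degrees, and initialization below are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn


class GroupedRationalActivation(nn.Module):
    """One learnable rational function per channel group (hypothetical sketch).

    Channels are split into `num_groups`; channels within a group share one
    set of rational coefficients (group-shared), while different groups learn
    different functions (inter-group diversity).
    """

    def __init__(self, channels: int, num_groups: int = 4,
                 num_degree: int = 5, den_degree: int = 4):
        super().__init__()
        assert channels % num_groups == 0, "channels must be divisible by num_groups"
        self.num_groups = num_groups
        self.group_size = channels // num_groups
        # Numerator coefficients p_0..p_m and denominator coefficients q_1..q_d,
        # one row per group. The initialization scale is an arbitrary choice here.
        self.p = nn.Parameter(0.1 * torch.randn(num_groups, num_degree + 1))
        self.q = nn.Parameter(0.1 * torch.randn(num_groups, den_degree))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> (B, G, C/G, H, W) so coefficients broadcast per group.
        b, c, h, w = x.shape
        xg = x.view(b, self.num_groups, self.group_size, h, w)

        # Horner evaluation of the numerator P_g(x) = p_0 + p_1 x + ... + p_m x^m.
        num = torch.zeros_like(xg)
        for k in range(self.p.shape[1] - 1, -1, -1):
            num = num * xg + self.p[:, k].view(1, -1, 1, 1, 1)

        # Q_g(x) = q_1 x + ... + q_d x^d; the 1 + |Q| form keeps the function pole-free.
        qpoly = torch.zeros_like(xg)
        for k in range(self.q.shape[1] - 1, -1, -1):
            qpoly = qpoly * xg + self.q[:, k].view(1, -1, 1, 1, 1)
        den = 1.0 + (qpoly * xg).abs()

        return (num / den).view(b, c, h, w)


# Usage: drop-in activation on a feature map, e.g. inside a U-shaped encoder block.
feat = torch.randn(2, 64, 32, 32)
act = GroupedRationalActivation(channels=64, num_groups=4)
out = act(feat)  # same shape: (2, 64, 32, 32)
```

Plotting each group's learned R_g over a range of inputs yields the group-level response curves the abstract points to for function-level interpretability; the channel-attention skip fusion would be a separate module layered on top of a block like this.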

Topics

Journal Article
