U₂-Attention-Net: a deep learning automatic delineation model for parotid glands in head and neck cancer organs at risk on radiotherapy localization computed tomography images.

Authors

Wen X, Wang Y, Zhang D, Xiu Y, Sun L, Zhao B, Liu T, Zhang X, Fan J, Xu J, An T, Li W, Yang Y, Xing D

Affiliations (7)

  • School of Pharmacy, Qingdao University, Qingdao, China; The Affiliated Hospital of Qingdao University, Qingdao University, Qingdao, China; Qingdao Cancer Institute, Qingdao University, Qingdao, China.
  • The Affiliated Hospital of Qingdao University, Qingdao University, Qingdao, China; Qingdao Cancer Institute, Qingdao University, Qingdao, China.
  • Medical College of Qingdao University, Qingdao University, Qingdao, China.
  • The Affiliated Hospital of Qingdao University, Qingdao University, Qingdao, China; Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China.
  • Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China.
  • Department of Radiotherapy, Yunnan Cancer Hospital, the Third Affiliated Hospital of Kunming Medical University, Kunming, Yunnan, China. Electronic address: [email protected].
  • The Affiliated Hospital of Qingdao University, Qingdao University, Qingdao, China; Qingdao Cancer Institute, Qingdao University, Qingdao, China; School of Basic Medicine, Qingdao University, Qingdao, China; School of Life Sciences, Tsinghua University, Beijing, China. Electronic address: [email protected].

Abstract

This study aimed to develop a novel deep learning model, U₂-Attention-Net (U₂A-Net), for precise segmentation of the parotid glands on radiotherapy localization CT images. CT images from 79 patients with head and neck cancer were selected, and label maps were delineated by relevant practitioners to construct a dataset. The dataset was divided into a training set (n = 60), validation set (n = 6), and test set (n = 13), with the training set augmented. U₂A-Net, divided into U₂A-Net V₁ (sSE) and U₂A-Net V₂ (cSE) according to the attention mechanism used, was evaluated for parotid gland segmentation with the DL loss function, using U-Net, Attention U-Net, DeepLabV3+, and TransUNet as comparison models. Segmentation was also performed using the GDL and GD-BCEL loss functions. Model performance was evaluated using the DSC, JSC, PPV, SE, HD, RVD, and VOE metrics. The quantitative results revealed that U₂A-Net based on DL outperformed the comparison models. While U₂A-Net V₁ achieved the highest PPV, U₂A-Net V₂ showed the best quantitative results in the other metrics. Qualitative results showed that U₂A-Net's segmentations closely matched expert delineations, reducing both over- and under-segmentation, with U₂A-Net V₂ being more effective. In the comparison of loss functions, U₂A-Net V₁ performed best with GD-BCEL and U₂A-Net V₂ with DL. The U₂A-Net model significantly improved parotid gland segmentation on radiotherapy localization CT images. The cSE attention mechanism showed advantages with DL, while sSE performed better with GD-BCEL.
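
The abstract contrasts channel (cSE) and spatial (sSE) squeeze-and-excitation attention and a Dice-based (DL) loss. The sketch below is a minimal, hypothetical PyTorch illustration of these standard building blocks, not the authors' published U₂A-Net code; the class and function names, the reduction ratio, and the loss formulation are assumptions made for illustration only.

```python
# Illustrative sketch only (not the authors' implementation): channel (cSE) and
# spatial (sSE) squeeze-and-excitation blocks plus a soft Dice loss, the kinds of
# components the abstract refers to. All names and hyperparameters are hypothetical.

import torch
import torch.nn as nn


class ChannelSE(nn.Module):
    """cSE: reweights feature channels via global pooling -> MLP -> sigmoid gate."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights  # channel-wise recalibration


class SpatialSE(nn.Module):
    """sSE: reweights spatial locations via a 1x1 convolution -> sigmoid gate."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.conv(x))  # spatial recalibration


def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss: 1 - DSC, where DSC = 2|P∩G| / (|P| + |G|)."""
    probs = torch.sigmoid(logits)
    intersection = (probs * target).sum(dim=(2, 3))
    union = probs.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return (1 - (2 * intersection + eps) / (union + eps)).mean()


if __name__ == "__main__":
    feats = torch.randn(2, 32, 64, 64)            # toy feature maps
    print(ChannelSE(32)(feats).shape)             # torch.Size([2, 32, 64, 64])
    print(SpatialSE(32)(feats).shape)             # torch.Size([2, 32, 64, 64])
    logits = torch.randn(2, 1, 64, 64)
    mask = (torch.rand(2, 1, 64, 64) > 0.5).float()
    print(dice_loss(logits, mask).item())
```

In this sketch the two gates differ only in what they squeeze: cSE pools over space and gates channels, while sSE projects channels to a single map and gates spatial positions, which mirrors the V₂ (cSE) versus V₁ (sSE) distinction described in the abstract.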

Topics

Journal Article