Deep learning-based real-time detection of head and neck tumors during radiation therapy.
Authors
Affiliations (5)
- Image X Institute, The University of Sydney, 1 Central Avenue, Eveleigh, New South Wales, 2006, AUSTRALIA.
- Image X Institute, The University of Sydney, 1 Central Ave, Eveleigh, New South Wales, 2015, AUSTRALIA.
- Blacktown Cancer & Haematology Centre, Blacktown Hospital, Blacktown Rd, Blacktown, New South Wales, 2148, AUSTRALIA.
- Radiation Oncology Network, Western Sydney Local Health District, Wentworthville, New South Wales, 2145, AUSTRALIA.
- Image X Institute, The University of Sydney, 1 Central Avenue, Sydney, New South Wales, 2015, AUSTRALIA.
Abstract

The clinical drivers for real-time head and neck (H&N) tumor tracking during radiation therapy (RT) are to account for motion caused by changes in immobilization mask fit and to reduce mask-related patient distress by replacing the masks with patient motion management methods. The purpose of this paper is to investigate a deep learning-based method that segments H&N tumors in patient kilovoltage (kV) x-ray images to enable real-time H&N tumor tracking during RT.
Approach: An ethics-approved clinical study collected data from 17 H&N cancer patients undergoing conventional H&N RT. For each patient, personalized conditional Generative Adversarial Networks (cGANs) were trained to segment H&N tumors in kV x-ray images. Network training data were derived from each patient's planning CT and contoured gross tumor volume (GTV). For each training epoch, the planning CT and GTV were deformed and forward projected to create the training dataset. The testing data consisted of the kV x-ray images used to reconstruct the pre-treatment CBCT volumes of the first, middle and last fractions. Ground truth tumor locations were derived by deformably registering the planning CT to each pre-treatment CBCT and then deforming and forward projecting the GTV. The generated cGAN segmentations were compared to the ground truth tumor segmentations using the absolute magnitude of the centroid error and the mean surface distance (MSD).
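As an illustrative sketch only (not the authors' implementation) of the per-epoch training-pair generation described above: a random rigid shift stands in for the patient-specific deformation and a parallel-beam sum along one axis stands in for the cone-beam forward projection. The function name make_training_pair, the shift magnitude and the toy volumes are assumptions made for this example.

```python
# Illustrative sketch of per-epoch training-pair generation, assuming a rigid
# shift as a stand-in for deformation and a parallel-beam sum as a stand-in
# for cone-beam forward projection. Not the study's actual pipeline.
import numpy as np
from scipy.ndimage import shift

def make_training_pair(planning_ct, gtv_mask, max_shift_mm=5.0, voxel_mm=1.0, rng=None):
    """Deform (here: rigidly shift) the planning CT and GTV, then forward
    project both to produce a synthetic kV-like image and its tumor label."""
    rng = np.random.default_rng() if rng is None else rng
    offset = rng.uniform(-max_shift_mm, max_shift_mm, size=3) / voxel_mm
    ct_def = shift(planning_ct, offset, order=1, mode="nearest")
    gtv_def = shift(gtv_mask.astype(float), offset, order=0, mode="nearest")
    # Parallel-beam "forward projection": integrate along the beam axis (axis 0).
    kv_image = ct_def.sum(axis=0)
    tumor_label = (gtv_def.sum(axis=0) > 0).astype(np.float32)
    return kv_image.astype(np.float32), tumor_label

# Toy volumes: a 64^3 CT with a spherical "tumor" mask.
ct = np.random.rand(64, 64, 64).astype(np.float32)
zz, yy, xx = np.mgrid[:64, :64, :64]
gtv = ((zz - 32) ** 2 + (yy - 32) ** 2 + (xx - 32) ** 2) < 8 ** 2
img, label = make_training_pair(ct, gtv)
print(img.shape, label.shape, label.sum())
```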
Main Results:
The centroid error for the nasopharynx (n=4), oropharynx (n=9) and larynx (n=4) patients was 1.5±0.9 mm, 2.4±1.6 mm and 3.5±2.2 mm, respectively, and the MSD was 1.5±0.3 mm, 1.9±0.9 mm and 2.3±1.0 mm, respectively. There was a weak correlation between the centroid error and the left-right (LR) tumor position (r=0.41), which was stronger for oropharynx patients (r=0.77).
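For reference, a minimal, hypothetical implementation of the two reported metrics (absolute centroid error magnitude and symmetric mean surface distance) for 2D binary segmentations, assuming isotropic pixels; the function names and the toy disc example are illustrative and not the study's code.

```python
# Illustrative implementations of the reported metrics for 2D binary masks,
# assuming isotropic pixels of size pixel_mm. Not the authors' exact code.
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def centroid_error(pred, truth, pixel_mm=1.0):
    """Absolute magnitude of the centroid displacement in mm."""
    c_pred = np.array(np.nonzero(pred)).mean(axis=1)
    c_true = np.array(np.nonzero(truth)).mean(axis=1)
    return float(np.linalg.norm(c_pred - c_true) * pixel_mm)

def surface(mask):
    """Boundary pixels of a binary mask."""
    return mask & ~binary_erosion(mask)

def mean_surface_distance(pred, truth, pixel_mm=1.0):
    """Symmetric mean surface distance in mm."""
    s_pred, s_true = surface(pred.astype(bool)), surface(truth.astype(bool))
    d_to_true = distance_transform_edt(~s_true)[s_pred]  # pred surface -> truth surface
    d_to_pred = distance_transform_edt(~s_pred)[s_true]  # truth surface -> pred surface
    return float(np.concatenate([d_to_true, d_to_pred]).mean() * pixel_mm)

# Toy example: a disc segmentation shifted by two pixels from the ground truth.
yy, xx = np.mgrid[:128, :128]
truth = (yy - 64) ** 2 + (xx - 64) ** 2 < 20 ** 2
pred = (yy - 66) ** 2 + (xx - 64) ** 2 < 20 ** 2
print(centroid_error(pred, truth), mean_surface_distance(pred, truth))
```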
Significance: This paper reports the accuracy of markerless H&N tumor detection in kV images. Accurate tracking of H&N tumors can enable more precise, ultimately mask-free RT, leading to better patient outcomes.