Predicting outcomes in head and neck cancer using CT images via transfer learning.

Authors

Zhao W, Huang X, Xu L

Affiliations (2)

  • Department of Radiology, The First Affiliated Hospital of Xi'an Jiaotong University, No.277 Yanta West Road, Xi'an, Shaanxi, 710061, China.
  • Department of Radiation Oncology, The First Affiliated Hospital of Xi'an Jiaotong University, No.277 Yanta West Road, Xi'an, Shaanxi, 710061, China. [email protected].

Abstract

Accurate preoperative risk stratification for patients with head and neck (H&N) cancer remains a critical challenge, as long-term survival rates are poor despite aggressive multimodality treatment. While deep learning models have shown promise for outcome prediction from medical images, their typical requirement for massive datasets presents a significant barrier to development and clinical translation. To overcome this limitation, we developed a transfer learning-based framework to accurately predict key treatment outcomes from non-invasive computed tomography (CT) images: locoregional recurrence (LR), distant metastasis (DM), and overall survival (OS). Our framework, OPHN-Net, is built on a VGG16 architecture pre-trained on ImageNet. It was trained and validated using a public dataset from The Cancer Imaging Archive comprising CT images and clinical data for 296 patients from four independent institutions. To address data limitations and class imbalance, we implemented a novel random-plane view resampling method for data augmentation. The network was trained and validated on data from two institutions and then independently tested on a cohort from the remaining two. Finally, we constructed an integrated model that combines the predictions of the imaging-based model with key clinical characteristics to further enhance performance. On the independent test cohort, OPHN-Net substantially outperformed both traditional radiomics and a previously published deep learning model across all endpoints, achieving AUCs of 0.84 (95% CI, 0.75-0.90) for LR, 0.89 (95% CI, 0.82-0.95) for DM, and 0.79 (95% CI, 0.70-0.87) for OS. Integrating clinical characteristics with the imaging-based predictions yielded a final model with even greater performance, boosting the AUCs to 0.87 (95% CI, 0.80-0.93) for LR, 0.91 (95% CI, 0.83-0.95) for DM, and 0.86 (95% CI, 0.78-0.92) for OS.
Our transfer learning-based framework, OPHN-Net, provides a robust and data-efficient method for predicting treatment outcomes in H&N cancer from non-invasive CT images. Integrating the imaging-based predictions with clinical characteristics yields a more comprehensive prognostic model. This approach has the potential to facilitate personalized treatment stratification, ultimately improving clinical decision-making and patient outcomes.
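The endpoints above are reported as AUCs with 95% confidence intervals. As a minimal illustration of how such an interval can be computed (a percentile-bootstrap sketch on synthetic labels and scores, not the paper's evaluation code):

```python
import numpy as np

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # Fraction of (positive, negative) pairs ranked correctly,
    # counting score ties as half-correct.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    n = len(labels)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        resampled = labels[idx]
        if resampled.min() == resampled.max():
            continue  # resample must contain both classes
        stats.append(auc(resampled, scores[idx]))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Hypothetical model outputs, for illustration only.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.5, 0.9])
print(auc(y_true, y_score), bootstrap_ci(y_true, y_score))
```

On small cohorts the percentile bootstrap is a common way to obtain such intervals; the paper does not specify its CI method, so this is only one plausible choice.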

Topics

  • Head and Neck Neoplasms
  • Tomography, X-Ray Computed
  • Deep Learning
  • Journal Article
