Assessing the Feasibility of Deep Learning-Based Attenuation Correction Using Photon Emission Data in 18F-FDG Images for Dedicated Head and Neck PET Scanners.
Authors
Affiliations (5)
- Department of Nuclear Medicine, Tehran University of Medical Sciences, Tehran 1416753955, Iran (the Islamic Republic of).
- Medical Physics and Biomedical Engineering, Tehran University of Medical Sciences, Tehran 1416753955, Iran (the Islamic Republic of).
- Department of Medical Informatics, Mashhad University of Medical Sciences, Mashhad 13944 91388, Razavi Khorasan Province, Iran (the Islamic Republic of).
- Vali-Asr Hospital, Tehran University of Medical Sciences, Tehran 1416753955, Iran (the Islamic Republic of).
- Medical Physics and Biomedical Engineering Department, Tehran University of Medical Sciences, Tehran 1416753955, Iran (the Islamic Republic of).
Abstract

This study aimed to evaluate the use of deep learning techniques to generate measured attenuation-corrected (MAC) images directly from non-attenuation-corrected (NAC) 18F-FDG PET images, focusing on head and neck imaging.
Materials and Methods:
A residual network (ResNet) was trained on 2D head and neck PET images from 114 patients (12,068 slices) free of pathology and artifacts. Images from 21 and 24 patients without pathology or artifacts were used for validation during training and for testing, respectively, and images from 12 patients with pathologies were used for independent testing. Prediction accuracy was assessed using RMSE, SSIM, PSNR, and MSE. The impact of unseen pathologies on the network was evaluated by measuring contrast and SNR in tumoral/hot regions of both reference and predicted images, and the statistical significance of the differences in contrast and SNR between reference and predicted images was assessed using a paired-sample t-test.
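The evaluation pipeline described above can be sketched as follows. This is a minimal illustration, not the study's actual code: the image arrays and per-patient contrast values are synthetic placeholders, the images are assumed normalized to [0, 1], and the SSIM shown is a simplified global (single-window) variant rather than the sliding-window version used by standard toolkits.

```python
import numpy as np
from scipy import stats

def mse(ref, pred):
    # Mean squared error between reference (MAC) and predicted images
    return float(np.mean((ref - pred) ** 2))

def rmse(ref, pred):
    # Root mean squared error
    return float(np.sqrt(mse(ref, pred)))

def psnr(ref, pred, data_range=1.0):
    # Peak signal-to-noise ratio in dB; data_range is the assumed
    # dynamic range of the normalized images
    return float(10.0 * np.log10(data_range ** 2 / mse(ref, pred)))

def ssim_global(ref, pred, data_range=1.0):
    # Simplified SSIM computed over the whole image with the standard
    # stabilizing constants (not the local sliding-window formulation)
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = ((ref - mu_x) * (pred - mu_y)).mean()
    return float(((2 * mu_x * mu_y + c1) * (2 * cov + c2)) /
                 ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)))

# Illustrative data: a reference slice and a slightly noisy "prediction"
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
pred = ref + 0.01 * rng.standard_normal((64, 64))

print("RMSE:", rmse(ref, pred))
print("PSNR:", psnr(ref, pred))
print("SSIM:", ssim_global(ref, pred))

# Paired-sample t-test on per-patient contrast in tumoral/hot regions
# (values below are synthetic placeholders, one pair per patient)
contrast_ref = np.array([2.1, 1.8, 2.4, 2.0, 1.9])
contrast_pred = np.array([2.0, 1.7, 2.3, 2.1, 1.8])
t_stat, p_value = stats.ttest_rel(contrast_ref, contrast_pred)
print("paired t-test p-value:", p_value)
```

The same paired test would be applied to SNR values; a p-value above 0.05 indicates no significant difference between reference and predicted images for that measure.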
Results:
Two nuclear medicine physicians evaluated the predicted head and neck MAC images and found them visually similar to the reference images. In the normal test group, PSNR, SSIM, RMSE, and MSE were 44.02 ± 1.77, 0.99 ± 0.002, 0.007 ± 0.0019, and 0.000053 ± 0.000030, respectively. In the pathological test group, the corresponding values were 43.14 ± 2.10, 0.99 ± 0.005, 0.0078 ± 0.0015, and 0.000063 ± 0.000026. No significant differences in SNR and contrast were found between reference and predicted images without pathology (p > 0.05), but significant differences were found in pathological images (p < 0.05).
Conclusion:
The deep learning network demonstrated the ability to directly generate head and neck MAC images that closely resembled the reference images. With additional training data, the model could potentially be used in dedicated head and neck PET scanners without requiring computed tomography (CT) for attenuation correction.