Research on a deep learning-based model for measurement of X-ray imaging parameters of atlantoaxial joint.
Authors
Affiliations (9)
- Gansu University of Traditional Chinese Medicine, Lanzhou, China. [email protected].
- Xi'an Hospital of Traditional Chinese Medicine, Xi'an, China. [email protected].
- Zigong First People's Hospital, Zigong, China.
- Xi'an Daxing Hospital, Xi'an, China.
- Gansu Provincial Hospital of TCM, Lanzhou, China.
- The First Affiliated Hospital of Gansu University of Traditional Chinese Medicine, Lanzhou, China.
- Hangzhou Jianpei Technology Company Ltd, Hangzhou, China.
- Gansu Provincial Hospital, Lanzhou, China.
- Gansu Provincial Hospital, Lanzhou, China. [email protected].
Abstract
To construct a deep learning-based SCNet model that automatically measures X-ray imaging parameters related to atlantoaxial subluxation (AAS) in cervical open-mouth view radiographs, and to evaluate the accuracy and reliability of the model. A total of 1973 cervical open-mouth view radiographs were collected from the picture archiving and communication system (PACS) of two hospitals (Hospitals A and B). Of the images from Hospital A, 365 were randomly selected as the internal test dataset for evaluating the model's performance, and the remaining 1364 were used as the training and validation datasets for constructing the model and tuning its hyperparameters, respectively. The 244 images from Hospital B served as an external test dataset for evaluating the robustness and generalizability of the model. The model identified and marked landmarks in the images for four parameters: lateral atlanto-dental space (LADS), atlas lateral mass inclination (ALI), lateral mass width (LW), and axis spinous process deviation distance (ASDD). The landmark-based measurements on the internal and external test datasets were compared against a reference standard defined as the mean of manual measurements by three radiologists. Percentage of correct key-points (PCK), intra-class correlation coefficient (ICC), mean absolute error (MAE), Pearson correlation coefficient (r), mean square error (MSE), root mean square error (RMSE), and Bland-Altman plots were used to evaluate the performance of the SCNet model. (1) Within a 2 mm distance threshold, the PCK of the landmarks predicted by the SCNet model was 98.6-99.7% on the internal test dataset and 98-100% on the external test dataset.
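The PCK criterion used above can be sketched as follows: a predicted landmark counts as correct if its Euclidean distance to the reference landmark is within the threshold (here 2 mm). This is a minimal illustrative implementation, not the authors' code; the function name and the assumption that landmarks are given as (N, 2) coordinate arrays in millimetres are mine.

```python
import numpy as np

def pck(pred, ref, threshold_mm=2.0):
    """Percentage of correct key-points (PCK).

    pred, ref: (N, 2) arrays of landmark coordinates in mm
    (illustrative layout; the paper's actual data format is not given).
    Returns the fraction of predicted landmarks whose Euclidean
    distance to the reference landmark is <= threshold_mm.
    """
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    dists = np.linalg.norm(pred - ref, axis=1)  # per-landmark distance in mm
    return float(np.mean(dists <= threshold_mm))

# Toy usage: one landmark exactly on target, one 3 mm away -> PCK = 0.5
print(pck([[0.0, 0.0], [0.0, 3.0]], [[0.0, 0.0], [0.0, 0.0]]))  # 0.5
```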
(2) In the internal test dataset, for the parameters LADS, ALI, LW, and ASDD, the SCNet model predictions showed strong correlation and agreement with the manual measurements (ICC = 0.80-0.96, r = 0.86-0.96, MAE = 0.47-2.39 mm/°, MSE = 0.38-8.55 mm<sup>2</sup>/°<sup>2</sup>, RMSE = 0.62-2.92 mm/°). (3) The same four parameters also showed strong correlation and agreement between SCNet predictions and manual measurements in the external test dataset (ICC = 0.81-0.91, r = 0.82-0.91, MAE = 0.46-2.29 mm/°, MSE = 0.29-8.23 mm<sup>2</sup>/°<sup>2</sup>, RMSE = 0.54-2.87 mm/°). The SCNet model constructed in this study can accurately identify atlantoaxial vertebral landmarks in cervical open-mouth view radiographs and automatically measure AAS-related imaging parameters. Furthermore, the independent external test set demonstrates that the model exhibits a degree of robustness and generalization capability on radiographs that meet standard imaging requirements.
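The agreement statistics reported above (MAE, MSE, RMSE, and Pearson's r) can be computed from paired model and manual measurements as sketched below. This is a generic illustration under my own assumptions about the data layout (two equal-length 1-D arrays of measurements, in mm or degrees); it is not the authors' analysis code, and ICC and Bland-Altman analysis are omitted.

```python
import numpy as np

def agreement_metrics(model_vals, manual_vals):
    """MAE, MSE, RMSE, and Pearson r between paired measurements.

    model_vals, manual_vals: equal-length 1-D sequences of the same
    parameter (e.g. LADS in mm), model prediction vs. the reference
    standard (mean of the radiologists' manual measurements).
    """
    model_vals = np.asarray(model_vals, dtype=float)
    manual_vals = np.asarray(manual_vals, dtype=float)
    err = model_vals - manual_vals
    mae = float(np.mean(np.abs(err)))        # mean absolute error
    mse = float(np.mean(err ** 2))           # mean square error
    rmse = float(np.sqrt(mse))               # root mean square error
    r = float(np.corrcoef(model_vals, manual_vals)[0, 1])  # Pearson r
    return {"MAE": mae, "MSE": mse, "RMSE": rmse, "r": r}

# Toy usage: constant 0.5 offset -> MAE 0.5, MSE 0.25, RMSE 0.5, r = 1.0
print(agreement_metrics([1.0, 2.0, 3.0, 4.0], [1.5, 2.5, 3.5, 4.5]))
```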