Deep learning-assisted comparison of different models for predicting maxillary canine impaction on panoramic radiography.

Authors

Zhang C, Zhu H, Long H, Shi Y, Guo J, You M

Affiliations (5)

  • State Key Laboratory of Oral Diseases, National Center for Stomatology and National Clinical Research Center for Oral Diseases, Department of Oral Medical Imaging, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China.
  • College of Computer Science, Sichuan University, Chengdu, Sichuan, China.
  • State Key Laboratory of Oral Diseases, National Center for Stomatology and National Clinical Research Center for Oral Diseases, Department of Orthodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China.
  • College of Computer Science, Sichuan University, Chengdu, Sichuan, China. Electronic address: [email protected].
  • State Key Laboratory of Oral Diseases, National Center for Stomatology and National Clinical Research Center for Oral Diseases, Department of Oral Medical Imaging, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China. Electronic address: [email protected].

Abstract

The panoramic radiograph is the most commonly used imaging modality for predicting maxillary canine impaction, and several prediction models have been constructed based on panoramic radiographs. This study aimed to compare the prediction accuracy of existing models in an external validation facilitated by an automatic, deep learning-based landmark detection system. Patients aged 7-14 years who underwent panoramic radiographic examination and received a diagnosis of impacted canines were included in the study. The automatic landmark localization system was employed to assist the measurement of geometric parameters on the panoramic radiographs, from which each model's prediction of canine impaction was then calculated. Three prediction models, constructed by Arnautska, Alqerban et al., and Margot et al., were evaluated. Accuracy, sensitivity, specificity, precision, and the area under the receiver operating characteristic curve (AUC) were used to compare the performance of the models. A total of 102 panoramic radiographs, comprising 102 impacted canines and 102 nonimpacted canines, were analyzed. The model by Margot et al. achieved the highest performance, with a sensitivity of 95% and a specificity of 86% (AUC, 0.97), followed by the model by Arnautska, with a sensitivity of 93% and a specificity of 71% (AUC, 0.94). The model by Alqerban et al. performed poorly, with an AUC of only 0.20. Two of the existing predictive models thus exhibited good diagnostic accuracy, whereas the third demonstrated suboptimal performance. Nonetheless, even the best-performing model is constrained by several limitations, including logical and computational challenges, and requires further refinement.
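
The abstract compares models using accuracy, sensitivity, specificity, precision, and AUC. As a minimal illustrative sketch (not the authors' code), the snippet below shows how these metrics are typically computed from per-canine labels and model outputs; the arrays y_true and y_score, the 0.5 threshold, and the simulated scores are hypothetical placeholders, not data from the study.

import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.array([1] * 102 + [0] * 102)   # hypothetical labels: 102 impacted, 102 nonimpacted canines
y_score = np.clip(y_true * 0.7 + rng.normal(0.3, 0.25, y_true.size), 0, 1)  # fake continuous risk scores

y_pred = (y_score >= 0.5).astype(int)      # binarize at an assumed 0.5 threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)               # true-positive rate (recall)
specificity = tn / (tn + fp)               # true-negative rate
precision = tp / (tp + fp)
auc = roc_auc_score(y_true, y_score)       # threshold-independent ranking metric

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} precision={precision:.2f} AUC={auc:.2f}")

Note that sensitivity, specificity, and precision depend on the chosen decision threshold, whereas the AUC summarizes performance across all thresholds, which is why the abstract reports both kinds of figures.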

Topics

Journal Article
