Evaluation of panoramic radiography for artificial intelligence-based assessment of impacted maxillary canines using cone-beam computed tomography as reference.

April 30, 2026

Authors

Aydin Gerzeli E, Nusari AN, Miloglu O, Ozbek IY

Affiliations (3)

  • Department of Oral, Dental and Maxillofacial Radiology, Faculty of Dentistry, Bingol University, Bingol, 15020, Turkey.
  • Department of Electrical and Electronics Engineering, Faculty of Engineering, Ataturk University, Yakutiye, Erzurum, 25240, Turkey.
  • Department of Oral, Dental and Maxillofacial Radiology, Faculty of Dentistry, Ataturk University, Yakutiye, Erzurum, 25240, Turkey. [email protected].

Abstract

To develop and evaluate a deep learning (DL)-based artificial intelligence (AI) framework for the comprehensive assessment of impacted maxillary canines, including positional classification, buccal-palatal localization, and detection of adjacent tooth root resorption, using panoramic radiographs (PRs) validated against cone-beam computed tomography (CBCT) as the reference standard.

This retrospective study included PRs and CBCT scans of 458 patients (581 impacted maxillary canines) acquired between 2020 and 2024. CBCT images served as the reference standard for buccal-palatal localization and root resorption, while canine position was determined from beta angle measurements on PRs. Regions of interest (ROIs) were manually annotated on PRs, and the cropped PR images were used as model inputs; CBCT images were used exclusively as the reference standard for labeling. Twelve pretrained convolutional neural network (CNN) architectures were evaluated across the three diagnostic tasks. For each task, the six best-performing architectures were combined into a majority voting-based fusion model to enhance diagnostic accuracy. Model performance was assessed using accuracy, precision, recall, and F1-score derived from confusion-matrix analyses.

For three-class positional classification (horizontal, mesioangular, vertical), the best individual CNN achieved an accuracy of 88.61%, which the fusion model improved to 90.90%. In buccal-palatal localization, individual model accuracy reached 74.85%, increasing to 80.40% with fusion. For detection of adjacent tooth root resorption, the highest individual accuracy was 83.02%, and the fusion model achieved 88.59%.

DL models can accurately evaluate the position, buccal-palatal localization, and associated root resorption of impacted maxillary canines when trained on ROIs extracted from PRs with CBCT-based annotations.
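The majority voting-based fusion described above can be sketched in a few lines; the function name and the tie-breaking rule are illustrative assumptions, not details taken from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Fuse per-model class predictions for one image by majority vote.

    predictions: list of class labels, one from each selected CNN.
    Ties are broken in favor of the label that appears first (an
    assumed convention; the paper does not specify tie handling).
    """
    counts = Counter(predictions)
    return counts.most_common(1)[0][0]

# Hypothetical example: six models classify one canine's position.
votes = ["mesioangular", "vertical", "mesioangular",
         "mesioangular", "horizontal", "mesioangular"]
print(majority_vote(votes))  # mesioangular
```

Hard voting of this kind requires only class labels from each model; a soft-voting variant would instead average the per-class probabilities before taking the argmax.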
The use of multiple CNN architectures combined with a fusion strategy significantly enhances diagnostic performance. These findings suggest that AI-based automated analysis may serve as a reliable complementary tool for improving diagnostic consistency and treatment planning in patients with impacted maxillary canines.

Topics

Journal Article
