
Advanced deep learning techniques for classifying dental conditions using panoramic X-ray images.

February 7, 2026

Authors

Golkarieh A, Afjehsoleymani B, Kiashemshaki K, Boroujeni SR

Affiliations (4)

  • Department of Computer Science and Engineering, Oakland University, Rochester, MI, USA.
  • Department of Dentistry, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran. [email protected].
  • Department of Computer Science, Bowling Green State University, Bowling Green, OH, USA.
  • Department of Applied Statistics & Operations Research, Bowling Green State University, Bowling Green, OH, USA.

Abstract

This study evaluated multiple deep learning approaches for automated classification of dental conditions in panoramic radiographs, comparing custom convolutional neural networks (CNNs), hybrid CNN-machine learning models, and fine-tuned pre-trained architectures for detecting fillings, cavities, implants, and impacted teeth. A dataset of 1,512 panoramic X-ray images with 11,137 manually annotated bounding boxes covering the four conditions was analyzed, with regions of interest extracted from the expert annotations for subsequent AI-based classification. Class imbalance was addressed by random downsampling, yielding a balanced dataset of 894 samples per condition. Multiple approaches were evaluated via 5-fold cross-validation: a custom CNN; hybrid models combining CNN features with traditional classifiers (Support Vector Machine, Decision Tree, Random Forest); and fine-tuned pre-trained networks (VGG16, Xception, ResNet50). Performance was assessed using accuracy, precision, recall, and F1-score. The hybrid CNN-Random Forest model achieved the highest accuracy, 85.4 ± 2.3%, with a macro-F1 score of 0.843 ± 0.028, an 11-percentage-point improvement over the custom CNN (74.29% accuracy, 0.724 macro-F1). VGG16 was the strongest pre-trained architecture (82.3 ± 2.0% accuracy, 0.817 macro-F1), followed by Xception (80.9 ± 2.3%) and ResNet50 (79.5 ± 2.7%). CNN + Random Forest showed especially strong fillings detection (F1: 0.860 ± 0.033) with balanced multi-class performance. Systematic misclassifications between morphologically similar conditions revealed inherent diagnostic challenges.
Hybrid CNN-based approaches that combine feature extraction with Random Forest classification provide superior discriminative capability for dental condition detection on manually annotated regions compared with standalone architectures. While these computationally efficient hybrid models show promise as supportive diagnostic tools, the observed misclassification patterns indicate that such AI systems should serve as adjuncts to clinical expertise, pending prospective validation studies.
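The pipeline the abstract describes can be sketched at a high level: balance the four classes by random downsampling to the minority count, represent each region of interest as a CNN feature vector, then fit a Random Forest evaluated with stratified 5-fold cross-validation on macro-F1. The sketch below is not the authors' code; it substitutes synthetic vectors for the CNN features (which would come from a network's penultimate layer), and all class counts, feature dimensions, and hyperparameters are illustrative assumptions.

```python
# Hedged sketch of a hybrid CNN-feature + Random Forest pipeline.
# Synthetic feature vectors stand in for per-ROI CNN activations;
# sizes and hyperparameters are illustrative, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)

# Four classes (fillings, cavities, implants, impacted teeth) with
# class imbalance; 64-dim stand-in "CNN" features per region.
counts = {0: 2000, 1: 1200, 2: 894, 3: 1500}
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n, 64))
               for c, n in counts.items()])
y = np.concatenate([np.full(n, c) for c, n in counts.items()])

# Random downsampling to the minority-class count (894 in the paper).
n_min = min(counts.values())
keep = np.concatenate([rng.choice(np.where(y == c)[0], n_min, replace=False)
                       for c in counts])
X_bal, y_bal = X[keep], y[keep]

# Random Forest on the balanced features, stratified 5-fold CV,
# scored with macro-averaged F1 as in the abstract.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X_bal, y_bal, cv=cv, scoring="f1_macro")
print(f"macro-F1: {scores.mean():.3f} +/- {scores.std():.3f}")
```

In practice the stand-in features would be replaced by activations from a fine-tuned backbone (e.g., VGG16 without its classification head), which is what distinguishes the hybrid approach from training a CNN classifier end to end.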

Topics

Journal Article
