
Dual Framework for Classification and Detection of Third Molar Impaction in Panoramic Radiographs.

February 7, 2026

Authors

Khurshid Z, Alsleem MH, Aljubairah FA, Alghafli HA, Alasafirah AO, Alibrahim BA, Alshamrani AA, Alsuhaymi AN

Affiliations (2)

  • Department of Prosthodontics and Dental Implantology, College of Dentistry, King Faisal University, Al-Ahsa, Saudi Arabia. Electronic address: [email protected].
  • Department of Prosthodontics and Dental Implantology, College of Dentistry, King Faisal University, Al-Ahsa, Saudi Arabia.

Abstract

The surgical extraction of impacted mandibular third molars presents significant clinical challenges, where accurate preoperative assessment is crucial to mitigate risks such as inferior alveolar nerve injury. Although artificial intelligence shows promise in dental radiology, existing approaches are often limited to binary classification, affected by class imbalance, and lacking in standardized evaluation protocols, thereby restricting their clinical applicability. This study proposes two independent deep learning frameworks for comprehensive analysis of third molar impactions. The first framework is an end-to-end object detection pipeline employing modified YOLOv10 and YOLOv11n architectures enhanced with multihead self-attention. The second framework is a feature-based classification approach, in which deep features extracted using ResNet50 and InceptionNetV3 are classified by traditional machine learning algorithms. Validated on a multinational dataset of 5796 expertly annotated orthopantomograms with high inter-rater agreement (κ = 0.92), the proposed frameworks demonstrated competitive performance. The Fine KNN classifier using ResNet50 features achieved the best classification performance, yielding 97.56% accuracy, 96.07% precision, 96.21% recall, and an F1-score of 96.10%, while InceptionNetV3-based classification achieved 97.33% accuracy with an F1-score of 95.30%. For object detection, YOLOv11n attained a mean average precision of 88.9% (mAP@0.5) and 85.7% (mAP@0.5:0.95) while maintaining substantially lower computational complexity (19.7 vs 28.4 GFLOPs). Ablation experiments confirmed that integrating multihead self-attention modules and generative adversarial network-based augmentation improved detection performance by 6.4% mean average precision.
The proposed frameworks enable accurate and automated multiclass assessment of third molar impactions, achieving high diagnostic performance while preserving computational efficiency suitable for clinical deployment. This work advances artificial intelligence-assisted surgical planning by providing reliable F1-score-based evaluation, real-time detection, and enhanced preoperative risk stratification in oral and maxillofacial surgery.

Topics

Journal Article
