
A Deep Learning Model for Second-Molar Lesions Related to Impacted Third Molars.

February 28, 2026

Authors

Jiang Y, Jin J, Gao Y, Tian Y, Hu L, Lu Y, Fu Y

Affiliations (4)

  • School of Stomatology, Nanjing Medical University, Nanjing, Jiangsu, China.
  • Department of Oral and Maxillofacial Surgery, The Affiliated Stomatology Hospital of Nanjing Medical University, Nanjing, Jiangsu, China; State Key Laboratory Cultivation Base of Research, Prevention and Treatment for Oral Diseases, Nanjing, Jiangsu, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing, Jiangsu, China.
  • The Second School of Clinical Medicine, Nanjing Medical University, Nanjing, Jiangsu, China.
  • Department of Oral and Maxillofacial Surgery, The Affiliated Stomatology Hospital of Nanjing Medical University, Nanjing, Jiangsu, China; State Key Laboratory Cultivation Base of Research, Prevention and Treatment for Oral Diseases, Nanjing, Jiangsu, China; Jiangsu Province Engineering Research Center of Stomatological Translational Medicine, Nanjing, Jiangsu, China. Electronic address: [email protected].

Abstract

This study aimed to develop an automated deep learning (DL) system to detect and classify second molar (M2) pathologies associated with impacted third molars (ITMs) on panoramic radiographs, with the goal of enhancing diagnostic accuracy and supporting informed clinical decision-making. We constructed a dataset of 1,170 panoramic radiographs showing M2s adjacent to ITMs. The cases were retrospectively divided into four groups: (1) no lesions; (2) caries (tooth decay); (3) external root resorption (ERR; loss of tooth root structure from external factors); or (4) both pathologies. Three experienced oral surgeons annotated the images using standardized criteria, resolving any disagreements by consensus. Our enhanced SMM-YOLOv8n model builds on You Only Look Once version 8 (YOLOv8) with Slim-Neck optimization and multidimensional attention mechanisms. We trained the model with transfer learning (applying knowledge from pre-trained models to new tasks) and evaluated it using 5-fold cross-validation (dividing the data into 5 parts and rotating the validation set). SMM-YOLOv8n achieved an mAP@50 of 0.886 on the internal test set. The macro-averaged precision, recall, and F1-score were 0.894, 0.960, and 0.926, respectively (calculated at an IoU threshold of 0.5). These results represent a clear improvement over the baseline YOLOv8. On a set of 60 images, the clinicians' mean sensitivity increased by 0.171, and their average interpretation time decreased by 8.79 minutes. SMM-YOLOv8n offers accurate and efficient detection of ITM-related M2 pathologies on panoramic radiographs. This approach may enhance early diagnosis, aid in treatment planning, and reduce the need for cone-beam computed tomography in initial assessments. This DL-based diagnostic tool may serve as a valuable decision-support system, particularly in clinical settings with limited access to specialized dental expertise.
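The evaluation metrics quoted above (precision, recall, and F1 at an IoU threshold of 0.5, macro-averaged over lesion classes) follow standard object-detection conventions. As a rough illustration only (this is not the authors' code, and the per-class counts below are made-up placeholders), the computation can be sketched as:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2).

    A detection is typically counted as a true positive when its IoU
    with a ground-truth box of the same class is >= 0.5.
    """
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0


def macro_prf1(counts):
    """Macro-averaged precision, recall, and F1 from per-class
    (true positive, false positive, false negative) counts:
    each class's metric is computed separately, then averaged
    with equal weight per class.
    """
    ps, rs, fs = [], [], []
    for tp, fp, fn in counts.values():
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        f = 2 * p * r / (p + r) if p + r else 0.0
        ps.append(p)
        rs.append(r)
        fs.append(f)
    n = len(counts)
    return sum(ps) / n, sum(rs) / n, sum(fs) / n


# Hypothetical per-class counts, purely for illustration.
counts = {"caries": (9, 1, 1), "ERR": (8, 2, 2)}
precision, recall, f1 = macro_prf1(counts)
```

mAP@50 extends this idea by sweeping the detector's confidence threshold to trace a precision-recall curve per class at IoU 0.5, averaging the area under each curve across classes; detection frameworks compute it internally.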

Topics

Journal Article
