
Hierarchical attention mechanism combined with deep neural networks for accurate semantic segmentation of dental structures in panoramic radiographs.

November 5, 2025

Authors

Esmaeili M, Dalili Z, Sadr H, Mousavie A, Faghihi A, Saei R, Nazari M

Affiliations (6)

  • Dental Sciences Research Center, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran.
  • Dental Sciences Research Center, Department of Oral and Maxillofacial Radiology, School of Dentistry, Guilan University of Medical Sciences, Rasht, Iran.
  • Neuroscience Research Center, Trauma Institute, Guilan University of Medical Sciences, Rasht, Iran.
  • Department of Artificial Intelligence in Medicine, Faculty of Advanced Technologies in Medicine, Iran University of Medical Sciences, Tehran, Iran.
  • Department of Health Information Technology and Management, School of Allied Medical Sciences, Shahid Beheshti University of Medical Sciences, Tehran, Iran.
  • Cardiovascular Diseases Research Center, Department of Cardiology, School of Medicine, Heshmat Hospital, Guilan University of Medical Sciences, Rasht, Iran.

Abstract

Computer vision, a rapidly advancing branch of artificial intelligence (AI), has gained significant attention in medical and dental applications. Semantic segmentation, a key technique within computer vision, enables the precise identification and delineation of objects at the pixel level, offering transformative potential for diagnostic imaging in dentistry. Panoramic radiographs are essential for diagnosing oral and maxillofacial conditions, yet their interpretation remains time-consuming and prone to human error, particularly in complex cases. This study evaluates the performance of a deep learning-based semantic segmentation model designed to identify and classify 24 distinct anatomical and pathological structures in panoramic radiographs. A dataset of 844 annotated panoramic images was collected from multiple radiography centers and used for training and testing. The model employs a hierarchical multi-scale attention mechanism to enhance accuracy by analyzing images at varying resolutions. Performance was assessed using key metrics, including specificity, accuracy, precision, recall, F1 score, and Intersection over Union (IoU). The proposed model demonstrated robust performance, achieving an overall accuracy of 98.73%, specificity of 98.86%, IoU value of 78.76%, precision of 86.97%, recall of 86.97%, and an F1 score of 84.54%. Notably, structures such as implants and amalgam restorations were identified with high reliability, while challenges persisted in detecting dental pulp and caries due to overlapping structures and subtle anatomical details. The deep neural network developed in this study exhibits significant potential for aiding dental professionals in accurately segmenting and identifying anatomical features in panoramic radiographs. While limitations exist in detecting specific intricate structures, the model's performance underscores the value of AI-driven tools in enhancing diagnostic accuracy and treatment planning in dentistry. Future work may explore complementary imaging modalities to address the remaining challenges.
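For readers less familiar with the approach, below is a minimal, generic sketch in PyTorch of how multi-scale attention fusion for semantic segmentation can work: a shared backbone segments the radiograph at several resolutions, and learned per-pixel attention weights decide how much each scale's prediction contributes at each location. The paper's code and exact architecture are not reproduced here, so the backbone (`DummyBackbone`), scales, and channel counts are illustrative assumptions; only the 24-class output follows the abstract.

```python
# Illustrative sketch only -- not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DummyBackbone(nn.Module):
    """Stand-in encoder-decoder returning (features, per-class logits).
    A real model would be a full segmentation network."""

    def __init__(self, num_classes: int = 24, feat_channels: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, feat_channels, kernel_size=3, padding=1),  # grayscale input
            nn.ReLU(inplace=True),
        )
        self.cls = nn.Conv2d(feat_channels, num_classes, kernel_size=1)

    def forward(self, x):
        feats = self.body(x)
        return feats, self.cls(feats)


class MultiScaleAttentionSeg(nn.Module):
    """Runs a shared backbone at several image scales and fuses the
    per-scale class logits with learned pixel-wise attention weights."""

    def __init__(self, backbone: nn.Module, scales=(0.5, 1.0, 2.0),
                 feat_channels: int = 256):
        super().__init__()
        self.backbone = backbone      # weights shared across scales
        self.scales = scales
        # one attention logit per pixel, predicted from backbone features
        self.attn_head = nn.Conv2d(feat_channels, 1, kernel_size=1)

    def forward(self, x):
        _, _, h, w = x.shape
        class_logits, attn_logits = [], []
        for s in self.scales:
            xs = F.interpolate(x, scale_factor=s, mode="bilinear",
                               align_corners=False)
            feats, logits = self.backbone(xs)
            # bring every scale back to the input resolution before fusing
            class_logits.append(F.interpolate(
                logits, size=(h, w), mode="bilinear", align_corners=False))
            attn_logits.append(F.interpolate(
                self.attn_head(feats), size=(h, w), mode="bilinear", align_corners=False))
        # normalise across scales so attention weights sum to 1 at every pixel
        attn = torch.softmax(torch.stack(attn_logits, dim=0), dim=0)
        fused = sum(a * l for a, l in zip(attn, class_logits))
        return fused  # (N, num_classes, H, W)


model = MultiScaleAttentionSeg(DummyBackbone())
image = torch.randn(1, 1, 256, 512)       # a downsampled grayscale panoramic image
label_map = model(image).argmax(dim=1)    # per-pixel predicted structure labels
```

The general intuition behind such fusion is that fine, easily confused structures (e.g., pulp or caries) can draw on the higher-resolution pass while large, unambiguous regions rely on coarser scales, which matches the abstract's description of analyzing images at varying resolutions.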

Topics

Radiography, Panoramic; Deep Learning; Neural Networks, Computer; Image Processing, Computer-Assisted; Tooth; Journal Article
