
Comparison of Artificial Intelligence Models for Automatic Segmentation of the Mandibular Canals and Branches.

February 6, 2026 · PubMed

Authors

Man H, Ma S, Luo H, Wang B, Shao J, Ge S, Wang HL

Affiliations (5)

  • Department of Periodontology, School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University and Shandong Key Laboratory of Oral Tissue Regeneration and Shandong Engineering Research Center of Dental Materials and Oral Tissue Regeneration and Shandong Provincial Clinical Research Center for Oral Diseases, Jinan, Shandong, China.
  • School of Computer Science and Technology, Shandong University, Qingdao, Shandong, China.
  • Department of Periodontology, School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University and Shandong Key Laboratory of Oral Tissue Regeneration and Shandong Engineering Research Center of Dental Materials and Oral Tissue Regeneration and Shandong Provincial Clinical Research Center for Oral Diseases, Jinan, Shandong, China. Electronic address: [email protected].
  • Department of Periodontology, School and Hospital of Stomatology, Cheeloo College of Medicine, Shandong University and Shandong Key Laboratory of Oral Tissue Regeneration and Shandong Engineering Research Center of Dental Materials and Oral Tissue Regeneration and Shandong Provincial Clinical Research Center for Oral Diseases, Jinan, Shandong, China. Electronic address: [email protected].
  • Department of Periodontics and Oral Medicine, University of Michigan School of Dentistry, Ann Arbor, Michigan, USA. Electronic address: [email protected].

Abstract

This study aimed to compare and improve the performance of three deep learning models, i.e., U-Net Transformer (UNETR), Swin UNETR, and 3D UX-Net, for segmentation of the mandibular canal and its branches. A dataset of 173 cone beam computed tomography (CBCT) scans was used for training, validation, and testing. The mandibular canals and branches were segmented manually and by the three AI models. A postprocessing module based on anatomical characteristics was then applied to improve model performance. Evaluations were conducted using the Dice similarity coefficient (DSC), intersection over union (IoU), 95th percentile Hausdorff distance (HD95), average symmetric surface distance (ASSD), precision, and recall. All models efficiently segmented the mandibular, incisive, and mental canals, operating at least 25 times faster than manual annotation. Both 3D UX-Net and Swin UNETR consistently outperformed the UNETR network across most metrics, with 3D UX-Net demonstrating a slight performance advantage over Swin UNETR in terms of DSC, IoU, and recall. Furthermore, the anatomically based postprocessing module significantly improved the metrics for all models. Ultimately, the 3D UX-Net with postprocessing achieved the highest accuracy, with mean values of 0.788 (DSC), 0.652 (IoU), 0.23 mm (HD95), 0.083 mm (ASSD), 72.7% (precision), and 87.0% (recall). 3D UX-Net and Swin UNETR are superior to UNETR for segmenting small dental structures; between the two, 3D UX-Net demonstrated statistically significant improvements in overlap and recall. Furthermore, the performance of these models can be significantly enhanced by applying postprocessing strategies based on anatomical characteristics.
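The two overlap metrics reported above (DSC and IoU) have simple set-based definitions. A minimal sketch of how they are typically computed for binary voxel masks, using NumPy; the function names are illustrative, not taken from the paper's codebase:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks are a perfect match.
    return 2.0 * intersection / denom if denom else 1.0

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection over union (Jaccard index): |A ∩ B| / |A ∪ B|."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    return float(np.logical_and(pred, gt).sum()) / union if union else 1.0
```

For a prediction and ground truth that overlap on half of their voxels, DSC exceeds IoU (e.g. 0.5 vs. 1/3), which is why the paper's DSC values (0.788) sit above its IoU values (0.652) for the same masks.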

Topics

Journal Article
