
Deep learning-based segmentation of enamel, cementum, alveolar bone, and gingiva in periodontal ultrasound images.

April 18, 2026

Authors

Piao JZ, Hu KS, Jung HS, Kim HJ

Affiliations (4)

  • Division in Anatomy and Developmental Biology, Department of Oral Biology, Human Identification Research Institute, BK21 FOUR Project, Yonsei University College of Dentistry, Seoul, 03722, Korea. Electronic address: [email protected].
  • Division in Anatomy and Developmental Biology, Department of Oral Biology, Human Identification Research Institute, BK21 FOUR Project, Yonsei University College of Dentistry, Seoul, 03722, Korea. Electronic address: [email protected].
  • Division in Anatomy and Developmental Biology, Department of Oral Biology, Taste Research Center, Oral Science Research Center, BK21 FOUR Project, Yonsei University College of Dentistry, Seoul, 03722, Korea. Electronic address: [email protected].
  • Division in Anatomy and Developmental Biology, Department of Oral Biology, Human Identification Research Institute, BK21 FOUR Project, Yonsei University College of Dentistry, Seoul, 03722, Korea; Department of Electrical and Electronic Engineering, College of Engineering, Yonsei University, Seoul, 03722, Korea. Electronic address: [email protected].

Abstract

To develop a deep learning-based multi-class segmentation model for the simultaneous segmentation of key periodontal structures, including enamel, cementum, alveolar bone, and gingiva, in ultrasound images, and to enable precise localization of the cementoenamel junction (CEJ), alveolar bone crest (ABC), and gingival margin (GM).

A novel dual-stream deep learning architecture featuring stochastic block shuffling was proposed. The model was trained for simultaneous four-class segmentation on an internal dataset of 752 images and validated on an external test set of 111 images. The resulting segmentation masks were subsequently used to identify three anatomical landmarks: the CEJ, ABC, and GM.

The model demonstrated strong segmentation performance, with median Dice similarity coefficient, intersection over union, precision, sensitivity, 95% Hausdorff distance, and average symmetric surface distance values of 0.891, 0.805, 0.887, 0.909, 0.083 mm, and 0.028 mm, respectively, for the internal set, and 0.841, 0.728, 0.781, 0.921, 0.089 mm, and 0.032 mm, respectively, for the external set. In the assessment of landmark localization accuracy, the model achieved median distance errors of 0.06 mm, 0.08 mm, and 0.06 mm for the CEJ, ABC, and GM, respectively.

The proposed deep learning model enabled accurate automated multi-class segmentation of periodontal structures in ultrasound images and facilitated highly precise localization of anatomical landmarks derived from the segmentation masks. The proposed automatic multi-class segmentation model may assist dental clinicians in visualizing and interpreting periodontal ultrasound images. This approach shows promise for supporting broader clinical adoption of ultrasonography for the evaluation of periodontal conditions and preoperative digital planning, including periodontal disease management, restorative treatment, and orthodontic care.
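For readers less familiar with the overlap metrics reported above, the following minimal sketch (not taken from the paper, and using hypothetical toy masks) shows how the Dice similarity coefficient and intersection over union (IoU) are conventionally computed for a single class from binary segmentation masks:

```python
# Illustrative sketch only: per-class Dice and IoU from binary masks.
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    """Compute Dice and IoU between two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    # Convention: two empty masks count as a perfect match.
    dice = 2.0 * intersection / total if total else 1.0
    iou = intersection / union if union else 1.0
    return float(dice), float(iou)

# Toy example: two overlapping 6x6 rectangular "segmentations".
pred = np.zeros((10, 10)); pred[2:8, 2:8] = 1   # 36 pixels
gt = np.zeros((10, 10)); gt[4:10, 4:10] = 1     # 36 pixels
dice, iou = dice_and_iou(pred, gt)              # intersection 16, union 56
```

Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why the paper's internal-set values of 0.891 and 0.805 track each other; the boundary-based metrics (95% Hausdorff distance, average symmetric surface distance) instead measure worst-case and average contour deviation in millimetres.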

Topics

Journal Article
