Exploring AI-driven deep learning approaches for optimizing space detection in single gap implantation based on CBCT images.

March 13, 2026

Authors

Anupuntanun P, Arunjaroensuk S, Narkbuakaew W, Thongvigitmanee S, Sinpitaksakul P, Pimkhaokham A

Affiliations (4)

  • Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand; Oral and Maxillofacial Surgery and Digital Implant Surgery Research Unit, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand.
  • Medical Imaging System Research Team, Assistive Technology and Medical Device Research Group, National Electronics and Computer Technology Center, Pathum Thani, Thailand.
  • Department of Radiology, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand.
  • Department of Oral and Maxillofacial Surgery, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand; Oral and Maxillofacial Surgery and Digital Implant Surgery Research Unit, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand. Electronic address: [email protected].

Abstract

Computer-assisted implant surgery (CAIS) is a valuable tool for improving implantation accuracy and efficiency, but its clinical implementation remains time-consuming and relies on expert experience, particularly for anatomical structure annotation. While AI-driven analysis offers a solution, few studies have assessed its use for evaluating single edentulous regions. To address this gap, this study introduces a novel, simple, three-step sequential approach for automated detection of single edentulous areas and adjacent structures, leveraging structure segmentation from the nnUNet-based DentalSegmentator framework without requiring further manual segmentation. Sixty-six cone-beam computed tomography (CBCT) scans, acquired from four different machines and encompassing a total of 80 single edentulous regions, were divided into parameter-tuning and validation cohorts. Anatomical structures were first segmented using the DentalSegmentator with the available pre-trained model. Detection was then computed using morphological image processing, the watershed algorithm, particle analysis, and a decision tree. Performance was validated against the annotations of three experienced dentists using accuracy, precision, recall, specificity, and F1-score. Finally, computational time was analyzed using the Wilcoxon signed-rank test (p < 0.001). The approach achieved high performance, with accuracy, precision, recall, specificity, and F1-score values of 0.95, 0.95, 0.96, 0.93, and 0.96, respectively. The automated detection time (3.3 s) represents a roughly 37-fold reduction in processing time compared to experts (120 s). The proposed automated detection approach demonstrated reliability, time efficiency, and consistency, positioning it as a valuable asset for pre-surgical implant planning.
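The abstract validates detection against expert annotations using five standard binary-classification metrics. As a minimal illustrative sketch (not the authors' code), the snippet below shows how those metrics follow from confusion-matrix counts when edentulous-region detection is treated as a binary decision per candidate region; the function name and example counts are hypothetical.

```python
def detection_metrics(tp: int, fp: int, fn: int, tn: int):
    """Compute accuracy, precision, recall (sensitivity), specificity,
    and F1-score from confusion-matrix counts.

    tp: edentulous regions correctly detected
    fp: regions falsely flagged as edentulous
    fn: edentulous regions missed
    tn: non-edentulous regions correctly rejected
    """
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, specificity, f1


# Hypothetical counts for illustration only (not the study's data):
acc, prec, rec, spec, f1 = detection_metrics(tp=8, fp=2, fn=0, tn=10)
```

With these example counts, accuracy is 18/20 = 0.90 and recall is 1.0; the reported study values (0.95, 0.95, 0.96, 0.93, 0.96) come from its own validation cohort.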

Topics

Journal Article
