Deep learning-based 3D automatic segmentation of impacted canines in CBCT scans.
Authors
Affiliations (10)
- Alanya Oral and Dental Health Center, Antalya, Turkey.
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli University, Kocaeli, Turkey.
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Alanya Alaaddin Keykubat University, Antalya, Turkey.
- Department of Pediatric Dentistry, Faculty of Dentistry, Inonu University, Malatya, Turkey. [email protected].
- Department of Pediatric Dentistry, Faculty of Dentistry, Istanbul Medeniyet University, Istanbul, Turkey.
- Eskişehir Oral and Dental Health Center, Eskisehir, Turkey.
- Department of Mathematics-Computer, Faculty of Science, Eskisehir Osmangazi University, Eskisehir, Turkey.
- Department of Orthodontics, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey.
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskisehir, Turkey.
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey.
Abstract
Impacted canines are among the most frequently encountered dental anomalies in maxillofacial practice. Accurate localization of these teeth is crucial for treatment planning, and Cone Beam Computed Tomography (CBCT) offers detailed 3D imaging for this purpose. However, manual segmentation on CBCT scans is time-consuming and subject to inter-observer variability. This study aimed to develop a deep learning model based on nnU-Net v2 for the automatic segmentation of impacted canines and to evaluate its performance using both classification and segmentation metrics. A total of 159 CBCT scans containing impacted canines were retrospectively collected and annotated using web-based segmentation software. Model training was performed using the nnU-Net v2 architecture with a learning rate of 0.00001 for 1000 epochs. The performance of the model was evaluated using recall and precision. In addition, segmentation performance was assessed using the Dice Similarity Coefficient (DSC), the 95% Hausdorff Distance (95% HD, in mm), and Intersection over Union (IoU). The nnU-Net v2 model achieved high performance in the detection and segmentation of impacted canines. The values obtained for recall and precision were 0.90 and 0.82, respectively. The segmentation metrics were also favorable, with a DSC of 0.84, a 95% HD of 7.07 mm, and an IoU of 0.74, indicating good overlap between predicted and reference segmentations. The results suggest that the nnU-Net v2-based deep learning model can effectively and autonomously segment impacted canines in CBCT volumes. Its strong performance highlights the potential of artificial intelligence to improve diagnostic efficiency in dentomaxillofacial radiology.
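As an illustration of the evaluation metrics named in the abstract (recall, precision, DSC, and IoU), the following is a minimal sketch of how they can be computed voxel-wise for a pair of binary 3D masks. This is a hypothetical example using NumPy only, not code from the study; the 95% Hausdorff Distance is a surface-distance metric and is omitted here because it requires additional geometry tooling.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Voxel-wise overlap metrics for binary masks `pred` (prediction)
    and `gt` (reference annotation) of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()    # true positive voxels
    fp = np.logical_and(pred, ~gt).sum()   # false positive voxels
    fn = np.logical_and(~pred, gt).sum()   # false negative voxels
    return {
        "precision": tp / (tp + fp),
        "recall":    tp / (tp + fn),
        "DSC":       2 * tp / (2 * tp + fp + fn),  # Dice Similarity Coefficient
        "IoU":       tp / (tp + fp + fn),          # Intersection over Union
    }

# Toy volume: reference is a 2x2x2 cube (8 voxels); the prediction
# covers it fully but over-segments by 4 extra voxels.
gt = np.zeros((4, 4, 4), dtype=bool)
gt[1:3, 1:3, 1:3] = True
pred = np.zeros_like(gt)
pred[1:3, 1:3, 1:4] = True

m = segmentation_metrics(pred, gt)
# recall = 1.0, precision ≈ 0.667, DSC = 0.8, IoU ≈ 0.667
```

Note that DSC and IoU are monotonically related (DSC = 2·IoU / (1 + IoU)), which is why the reported DSC of 0.84 and IoU of 0.74 move together.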