Deep learning for automated mandibular canal segmentation in CBCT scans.
Authors
Affiliations (6)
- Hospital of Stomatology Shantou University Medical College, Shantou, 515000, China.
- School and Hospital of Stomatology, Guangdong Engineering Research Center of Oral Restoration and Reconstruction, Guangzhou Key Laboratory of Basic and Applied Research of Oral Regenerative Medicine, Guangzhou Medical University, Guangzhou, 510182, China.
- Network & Information Center of Shantou University, Shantou, Guangdong, China.
- School and Hospital of Stomatology, Guangdong Engineering Research Center of Oral Restoration and Reconstruction, Guangzhou Key Laboratory of Basic and Applied Research of Oral Regenerative Medicine, Guangzhou Medical University, Guangzhou, 510182, China. [email protected].
- Hospital of Stomatology Shantou University Medical College, Shantou, 515000, China. [email protected].
- Department of Stomatology of Shantou University Medical College, Shantou, Guangdong, China. [email protected].
Abstract
This study aims to develop a framework for automated mandibular canal segmentation in cone beam computed tomography (CBCT) scans. The dataset, source code, and trained models are publicly accessible, allowing for reproducibility and further development by the research community. A total of 236 CBCT scans were collected from the Stomatology Hospital of Shantou University Medical College, and the mandibular canals in these scans were manually annotated with fine granularity. A custom-designed 3D U-Net, named ManCan_ResU-Net, and two commonly used 3D U-Net models were employed as candidate models. The soft Dice Similarity Coefficient (DSC) loss was used as the loss function. During inference, a post-processing step involving connected-components analysis and removal of small disconnected objects was applied to refine the segmentation results. Model performance was evaluated using the following metrics: voxel accuracy (ACC), sensitivity (SEN), specificity (SPE), DSC, Hausdorff distance (HD), 95th percentile Hausdorff distance (HD95), average surface distance (ASD), and average symmetric surface distance (ASSD). The MCSTU dataset, which contains a development dataset (218 CBCT images) and an independent test dataset (18 CBCT images) with fine-grained annotations, has been made publicly available. The validation loss of ManCan_ResU-Net was lower than those of the two commonly used models. Incorporating post-processing significantly improved model performance, particularly by reducing the HD metric. On the hold-out test dataset, the ManCan_ResU-Net model achieved ACC, SEN, SPE, DSC, HD, HD95, ASD, and ASSD values, with 95% confidence intervals, of 1 (1–1), 0.86 (0.83–0.87), 1 (1–1), 0.85 (0.83–0.86), 10.1 (8.67–13.6), 1.8 (1.6–2.2), 0.69 (0.58–0.85), and 0.72 (0.6–0.83), respectively.
On the independent test dataset, the model obtained ACC, SEN, SPE, DSC, HD, HD95, ASD, and ASSD values, with 95% confidence intervals, of 1 (1–1), 0.93 (0.91–0.95), 1 (1–1), 0.80 (0.79–0.81), 21.3 (11.7–53.9), 2.59 (2.33–3), 1 (0.96–1.21), and 0.92 (0.861–1), respectively. The proposed segmentation framework achieved strong performance on both the hold-out and independent test datasets. In the future, after further validation of the model's generalization ability, it may be applied in real clinical settings for oral surgery planning.
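The soft DSC loss mentioned in the abstract is the standard differentiable relaxation of the Dice coefficient, computed on continuous voxel-wise probabilities rather than hard labels. A minimal NumPy sketch is shown below; the paper's exact formulation (smoothing constant, per-batch vs. per-volume reduction, and deep learning framework) is not specified in the abstract, so these details are illustrative assumptions.

```python
import numpy as np

def soft_dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss: 1 - DSC on continuous probabilities.

    pred   : voxel-wise foreground probabilities in [0, 1]
    target : binary ground-truth mask, same shape as pred
    eps    : smoothing term (illustrative choice, not from the paper)
    """
    pred = pred.astype(np.float64).ravel()
    target = target.astype(np.float64).ravel()
    intersection = np.sum(pred * target)
    dice = (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)
    return 1.0 - dice

# A perfect prediction gives a loss of 0; fully disjoint masks give ~1.
mask = np.array([0.0, 1.0, 1.0, 0.0])
print(round(soft_dice_loss(mask, mask), 6))  # prints 0.0
```

In a training framework the same expression would be written with tensor operations so gradients flow through `pred`.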
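The post-processing step described in the abstract (connected-components analysis followed by removal of small disconnected objects) can be sketched with `scipy.ndimage` as follows. The voxel-count threshold and the connectivity used in the paper are not stated, so both are hypothetical here.

```python
import numpy as np
from scipy import ndimage

def remove_small_components(binary_mask, min_voxels=100):
    """Keep only connected components with at least min_voxels voxels.

    Illustrative sketch of the described post-processing; min_voxels
    and the default connectivity are assumptions, not the paper's values.
    """
    labeled, num = ndimage.label(binary_mask)  # label connected components
    if num == 0:
        return binary_mask
    # Voxel count of each component, indexed by label 1..num
    sizes = ndimage.sum(binary_mask, labeled, range(1, num + 1))
    keep_labels = np.nonzero(sizes >= min_voxels)[0] + 1
    return np.isin(labeled, keep_labels).astype(binary_mask.dtype)
```

Applied to a predicted canal mask, this suppresses spurious small islands while leaving the main canal component intact, which is consistent with the reported reduction in the HD metric.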