Transfer Learning From Micro-CT to Periapical Radiographs for Three-Dimensional Root Canal Morphological Identification.
Affiliations (5)
- Department of Stomatology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
- School of Stomatology, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China.
- School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China.
- State Key Laboratory of Oral Diseases & National Center for Stomatology & National Clinical Research Center for Oral Diseases, Department of Cariology and Endodontics, West China Hospital of Stomatology, Sichuan University, Chengdu, Sichuan, China.
- Dental College of Georgia, Augusta University, Augusta, Georgia, USA.
Abstract
This study investigated the transfer of implicit anatomical features from micro-CT to periapical radiographs using fused-rooted mandibular second molars (MSMs) as a model. The objective was to evaluate the feasibility and effectiveness of multimodal transfer learning for the three-dimensional (3D) morphological identification of root canals, and to examine how task complexity influences transfer performance. Fused-rooted MSMs were scanned using high-resolution micro-CT to generate virtual radiographs. Clinically simulated periapical radiographs (CSPRs) were obtained from ex vivo mandibles to reproduce realistic clinical conditions. Based on micro-CT classification, root canals were divided into merging, symmetrical and asymmetrical types. Four convolutional neural network (CNN) architectures (VGG19, ResNet18, ResNet50 and EfficientNet-b5) were trained under three conditions: (1) CSPRs with ImageNet-pretrained CNNs, (2) virtual radiographs with ImageNet-pretrained CNNs, and (3) CSPRs with CNNs pretrained on virtual radiographs. Grad-CAM visualisation was used to interpret model attention, and results were compared with those of four endodontic residents. To reduce task complexity, symmetrical and asymmetrical types were later merged into a "separating" group to generate a two-class classification task. In the three-class task, CNNs pretrained on virtual radiographs achieved an average accuracy of 69.68% (95% CI: 64.61%-74.76%), significantly higher than ImageNet-pretrained models (64.36%, 95% CI: 61.12%-67.61%) and endodontic residents (61.17%, 95% CI: 56.09%-66.25%) (p < 0.05). Grad-CAM visualisation revealed that virtual radiograph-pretrained models concentrated attention on root structures, whereas ImageNet-pretrained networks showed diffuse or misplaced focus. 
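The two-stage strategy behind condition (3) — pretrain on micro-CT virtual radiographs, then fine-tune on CSPRs — can be sketched in PyTorch. This is a minimal illustration only: the tiny CNN, random tensors and training loop are stand-ins for the actual VGG19/ResNet/EfficientNet architectures and radiograph datasets, which are not reproduced here.

```python
# Two-stage transfer-learning sketch (illustrative stand-in only):
# stage 1 "pretrains" a small CNN on synthetic virtual radiographs;
# stage 2 transfers the convolutional backbone, freezes it, and
# fine-tunes a fresh classifier head on synthetic CSPRs.
import torch
import torch.nn as nn


class TinyCNN(nn.Module):
    def __init__(self, n_classes):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        return self.head(self.backbone(x))


def train(model, x, y, epochs=2, lr=1e-3):
    # Optimise only the parameters that are not frozen.
    opt = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=lr
    )
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()


# Stage 1: pretrain on (synthetic) virtual radiographs, three canal types.
virtual_x = torch.randn(8, 1, 64, 64)
virtual_y = torch.randint(0, 3, (8,))
model = TinyCNN(n_classes=3)
train(model, virtual_x, virtual_y)

# Stage 2: keep the pretrained backbone, replace the head,
# and fine-tune on (synthetic) clinically simulated radiographs.
for p in model.backbone.parameters():
    p.requires_grad = False          # freeze transferred features
model.head = nn.Linear(16, 3)        # fresh head for the target task
cspr_x = torch.randn(8, 1, 64, 64)
cspr_y = torch.randint(0, 3, (8,))
train(model, cspr_x, cspr_y)

logits = model(cspr_x)
print(logits.shape)  # torch.Size([8, 3])
```

In practice the backbone may also be fine-tuned at a reduced learning rate rather than frozen; the freeze here simply makes the transfer of implicit features explicit.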
In the two-class task, accuracies were 79.79% (95% CI: 73.30%-86.27%) for CNNs pretrained on virtual radiographs, 73.41% (95% CI: 67.54%-79.27%) for ImageNet-pretrained models and 76.60% (95% CI: 69.28%-83.91%) for residents, with no significant differences (p > 0.05). The overall diagnostic balance improved following transfer learning, indicating better feature representation across classes. Implicit 3D features extracted from micro-CT-based virtual radiographs can be effectively transferred to CSPRs through transfer learning. This approach enhances CNN interpretability and diagnostic precision in identifying root canal morphology. The benefits of transfer learning are greater for complex, multi-class tasks that require the extraction of intricate morphological features, whereas its effect diminishes in simplified binary classifications. These findings provide a theoretical and experimental foundation for applying multimodal transfer learning to clinical dental imaging.
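The Grad-CAM attention maps reported above can be sketched as follows. This is a generic Grad-CAM implementation over a toy CNN and a random input tensor, not the study's actual models or radiographs: gradients of the predicted class score are pooled into per-channel weights, the feature maps are combined, rectified and upsampled to image resolution.

```python
# Minimal Grad-CAM sketch (illustrative stand-in only): hooks capture
# the activations and gradients of a target conv layer, gradients are
# global-average-pooled into channel weights, and the weighted feature
# maps give a class-attention heatmap over the input.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 3),
)
target_layer = model[3]  # activation after the last conv layer

feats, grads = {}, {}
target_layer.register_forward_hook(
    lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(
    lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 1, 64, 64, requires_grad=True)
logits = model(x)
logits[0, logits.argmax()].backward()  # gradient of the top class score

# Channel weights = global-average-pooled gradients (Grad-CAM).
w = grads["a"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
print(cam.shape)  # torch.Size([1, 1, 64, 64])
```

Overlaying such a heatmap on the radiograph shows whether a network's evidence lies on the root structures, which is how diffuse or misplaced attention in the ImageNet-pretrained models would be detected.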