Geometric-Driven Cross-Modal Registration Framework for Optical Scanning and CBCT Models in AR-Based Maxillofacial Surgical Navigation
Authors
Abstract
Accurate preoperative planning for dental implants, especially in edentulous or partially edentulous patients, relies on precise localization of radiographic templates that guide implant positioning. By having the patient wear a patient-specific radiographic template, clinicians can better assess anatomical constraints and plan optimal implant paths. However, due to the low radiopacity of such templates, their spatial position is difficult to determine directly from cone-beam computed tomography (CBCT) scans. To overcome this limitation, high-resolution optical scans of the templates are acquired, providing detailed geometric information for accurate spatial registration. This paper proposes a geometric-driven cross-modal registration framework that aligns the optical scan model of the radiographic template with patient CBCT data, enhancing registration accuracy through the extraction of geometric features such as curvature and occlusal contours. A hybrid deep learning workflow further improves robustness, achieving a root mean square error (RMSE) of 1.68 mm and a mean absolute error (MAE) of 1.25 mm. The system also incorporates augmented reality (AR) for real-time surgical navigation. Clinical and phantom experiments validate its effectiveness in supporting precise implant path planning and execution. The proposed system enhances the efficiency and safety of dental implant surgery by integrating geometric feature extraction, deep learning-based registration, and AR-assisted navigation.
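As a minimal sketch of how the reported error metrics can be evaluated (the function name and the use of corresponding landmark pairs are illustrative assumptions, not the paper's implementation), RMSE and MAE over per-point Euclidean registration errors may be computed as:

```python
import numpy as np

def registration_errors(pred, gt):
    """Compute RMSE and MAE of per-point Euclidean distances.

    pred, gt: (N, 3) arrays of corresponding registered and
    ground-truth 3D landmark coordinates (e.g. in mm).
    """
    # Euclidean distance between each corresponding point pair
    d = np.linalg.norm(np.asarray(pred) - np.asarray(gt), axis=1)
    rmse = float(np.sqrt(np.mean(d ** 2)))
    mae = float(np.mean(d))
    return rmse, mae
```

For example, two landmark pairs with errors of 5 mm and 0 mm yield an MAE of 2.5 mm and an RMSE of about 3.54 mm, illustrating that RMSE penalizes large outlier errors more heavily than MAE.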