LADAS: A Localization-Adaptive Dual-Branch Framework for Accurate Delineation of Teeth and Jaw Bone in Axial CBCT Slices.
Affiliations (7)
- School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu, 610500, Sichuan, China.
- Lab of Machine Learning, Southwest Petroleum University, Chengdu, 610500, Sichuan, China.
- West China School of Stomatology, Sichuan University, Chengdu, 610041, Sichuan, China.
- Institute for Artificial Intelligence, Southwest Petroleum University, Chengdu, 610500, Sichuan, China.
- West China School of Stomatology, Sichuan University, Chengdu, 610041, Sichuan, China. [email protected].
- State Key Laboratory of Oral Diseases, Sichuan University, Chengdu, 610041, Sichuan, China. [email protected].
- National Center for Stomatology & National Clinical Research Center for Oral Diseases, Sichuan University, Chengdu, 610041, Sichuan, China. [email protected].
Abstract
Accurate delineation of teeth and jaw bone in cone-beam computed tomography (CBCT) is essential for digital dental diagnosis, treatment planning, and surgical navigation. Reliable segmentation nevertheless remains challenging owing to imaging noise, blurred boundaries, and the scarcity of annotated dental datasets: conventional methods often lose boundary precision under such image variability, while large-scale generic models struggle to adapt to CBCT characteristics. To meet the need for precise CBCT segmentation, we propose localization-adaptive dual-branch accurate segmentation (LADAS), an automatic-prompt segmentation framework designed to enhance the delineation of teeth and jaw bone in axial CBCT slices. The framework first localizes the tooth and jaw regions and then performs refined segmentation with a dual-branch architecture that balances global structural perception against local detail preservation. Rather than retraining the entire model, LADAS employs lightweight adapter modules with dual enhancement in the channel and spatial dimensions to transfer general visual knowledge to CBCT imaging characteristics. We annotated 332 dental CBCT scans, allocating 330 for slice-based training and validation and reserving 2 full volumes for independent evaluation. Experimentally, our method achieves a Dice similarity coefficient (DSC) of 91.43%, outperforming eight existing segmentation methods, while reducing the average processing time from 203.2 min (manual) to 16.5 min (AI-assisted). These results demonstrate the potential of LADAS as an effective assistive tool in dental workflows.
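The abstract's "lightweight adapter modules featuring dual enhancement in both channel and spatial dimensions" can be illustrated with a minimal NumPy sketch. This is an assumed design, not the paper's implementation: the channel branch is modeled as squeeze-and-excitation-style gating, the spatial branch as a sigmoid gate over the channel-mean response, and the two are combined residually so a frozen backbone's features are only lightly perturbed. All function names and weight shapes here are hypothetical.

```python
import numpy as np

def channel_enhance(x, w1, w2):
    """Channel branch (assumed SE-style design): global-average-pool each
    channel, pass through a small bottleneck MLP, and rescale channels."""
    # x: feature map of shape (C, H, W); w1: (r, C); w2: (C, r)
    z = x.mean(axis=(1, 2))                    # squeeze -> (C,)
    h = np.maximum(w1 @ z, 0.0)                # bottleneck + ReLU -> (r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))        # sigmoid gate -> (C,)
    return x * s[:, None, None]                # channel-wise rescaling

def spatial_enhance(x):
    """Spatial branch (assumed): gate each location by a sigmoid of the
    channel-mean response, emphasizing spatially salient regions."""
    m = x.mean(axis=0)                         # (H, W)
    g = 1.0 / (1.0 + np.exp(-m))               # sigmoid gate
    return x * g[None, :, :]

def dual_adapter(x, w1, w2):
    """Lightweight adapter: residual sum of channel- and spatially-enhanced
    features, leaving the backbone output shape unchanged."""
    return x + channel_enhance(x, w1, w2) + spatial_enhance(x)

# Usage sketch with a random 8-channel feature map
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))
w1 = 0.1 * rng.standard_normal((2, 8))         # bottleneck ratio r = 2
w2 = 0.1 * rng.standard_normal((8, 2))
y = dual_adapter(x, w1, w2)
```

Because only `w1` and `w2` (a handful of parameters per adapter) would be trained, such modules transfer a pretrained model to CBCT data far more cheaply than full fine-tuning, which is the trade-off the abstract describes.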