A YOLOv12-based approach for automatic detection of cephalometric landmarks on 2D lateral skull X-ray images.
Authors
Affiliations (3)
- Department of Artificial Intelligence and Data Science, Faculty of Engineering and Technology, Datta Meghe Institute of Higher Education and Research (Deemed to Be University), Wardha, Maharashtra, 442001, India. [email protected].
- Department of Artificial Intelligence and Machine Learning, Faculty of Engineering and Technology, Datta Meghe Institute of Higher Education and Research (Deemed to Be University), Wardha, Maharashtra, 442001, India.
- Department of Artificial Intelligence and Data Science, Faculty of Engineering and Technology, Datta Meghe Institute of Higher Education and Research (Deemed to Be University), Wardha, Maharashtra, 442001, India.
Abstract
Cephalometric analysis is the quantitative evaluation of skeletal and soft-tissue relationships on lateral skull radiographs; it underpins diagnosis, treatment planning, and growth assessment in orthodontics. The analysis hinges on cephalometric landmarks, anatomical reference points whose 2-D coordinates are used to derive the angles, distances, and ratios that guide clinical decisions. Manual identification of these landmarks is time-consuming, taking 10 to 15 min per image, and is subject to inter- and intra-examiner variability that can exceed 2 mm, propagating error into subsequent measurements. In recent years, artificial intelligence methods have advanced rapidly and are now widely adopted in medical imaging. This paper proposes an automatic landmark-detection pipeline built on YOLOv12, the newest iteration of the You-Only-Look-Once family. Trained and evaluated on a publicly available cephalometric dataset, the YOLOv12 model localized 53.47% of landmarks within 1 mm of the ground truth and 80.57% within 2 mm.
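The reported figures (53.47% within 1 mm, 80.57% within 2 mm) are instances of the successful detection rate (SDR), the standard metric for cephalometric landmark detection. As a minimal sketch (not the authors' code), the metric can be computed from predicted and ground-truth landmark coordinates as follows; the pixel spacing of 0.1 mm/px is an assumed value for illustration:

```python
import numpy as np

def sdr(pred, gt, pixel_spacing_mm=0.1, thresholds_mm=(1.0, 2.0)):
    """Successful Detection Rate: fraction of landmarks whose radial
    (Euclidean) error falls within each threshold, in millimetres.

    pred, gt: (N, 2) arrays of landmark coordinates in pixels.
    pixel_spacing_mm: assumed physical size of one pixel (illustrative).
    """
    err_mm = np.linalg.norm(pred - gt, axis=1) * pixel_spacing_mm
    return {t: float(np.mean(err_mm <= t)) for t in thresholds_mm}

# Toy example: 4 landmarks with radial errors of 0.5, 1.5, 0.8, 2.5 mm,
# giving SDR@1mm = 2/4 = 0.5 and SDR@2mm = 3/4 = 0.75.
pred = np.array([[10.0, 10.0], [20.0, 20.0], [30.0, 30.0], [40.0, 40.0]])
gt = pred + np.array([[5.0, 0.0], [15.0, 0.0], [8.0, 0.0], [25.0, 0.0]])
print(sdr(pred, gt))
```

In practice the threshold count would be averaged over all landmarks and all test images, but the per-image computation is the same.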