Evaluation of a no-code AI model for detecting periapical radiolucencies: impact of anatomical region on diagnostic performance.
Affiliations (7)
- Department of Surgical and Diagnostic Sciences, Marquette University School of Dentistry, 1801 W Wisconsin Ave, Milwaukee, WI, 53233, USA. [email protected].
- Department of Diagnostic Sciences, Adams School of Dentistry, University of North Carolina, Chapel Hill, NC, USA. [email protected].
- Department of Surgical and Diagnostic Sciences, Marquette University School of Dentistry, 1801 W Wisconsin Ave, Milwaukee, WI, 53233, USA.
- Technological Innovation Center - Department of General Dental Sciences, Marquette University School of Dentistry, Milwaukee, WI, USA.
- Department of Conservative Dentistry and Oral Health, Riga Stradins University, Riga, 1007, Latvia.
- Baltic Biomaterials Centre of Excellence, Headquarters at Riga Technical University & RSU Institute of Stomatology, Riga, Latvia.
- Department of Conservative Dentistry, Periodontology and Digital Dentistry, LMU Hospital, LMU, Munich, Germany.
Abstract
To develop a no-code artificial intelligence (AI) model for the detection of apical radiolucent lesions and to assess how lesion location influences the model's diagnostic performance.

A total of 312 periapical radiographs were retrospectively collected, each accompanied by a cone-beam computed tomography (CBCT) scan obtained within six months and a radiology report authored by board-certified oral and maxillofacial radiologists. These reports served as the reference standard, and all findings were cross-verified by the primary investigator through CBCT review. The dataset comprised 181 images with at least one apical radiolucent lesion and 131 lesion-free controls. A diagnostic model was developed on the no-code AI platform LandingLens (Landing AI LLC, Palo Alto, CA). De-identified radiographs were manually annotated with bounding boxes around lesions and automatically divided into training (70%), validation (20%), and testing (10%) subsets. Model performance was evaluated at the tooth level using sensitivity, specificity, accuracy, precision, and F1 score. To investigate the influence of anatomical site, data were stratified by region and analyzed with chi-square and pairwise chi-square tests (α = 0.05).

The AI model demonstrated robust diagnostic performance, achieving 88.7% sensitivity, 93.0% specificity, 76.6% precision, 92.1% accuracy, and an F1 score of 0.822. Sensitivity was significantly higher in the maxilla (93.1%) than in the mandible (83.8%; p = 0.049), whereas specificity was higher in the mandible (95.4%) than in the maxilla (90.8%; p = 0.01). No performance differences were found between anterior and posterior teeth (p > 0.05). Regionally, sensitivity was highest in the anterior maxilla (97.7%), while specificity peaked in the anterior mandible (97.9%; p < 0.05). Other diagnostic metrics did not vary significantly across regions (p > 0.05).

This no-code AI model detected apical radiolucent lesions with high accuracy and showed that diagnostic performance can vary with anatomical location. The model's higher sensitivity in the maxilla and higher specificity in the mandible highlight the relevance of regional anatomy in AI-assisted diagnostics. These findings support the potential of no-code AI platforms as accessible tools for building clinically meaningful diagnostic models and underscore the importance of anatomical considerations in their implementation.
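LandingLens performs the 70/20/10 partition automatically, so the study itself did not script this step. For readers replicating the split outside the platform, the sketch below shows one equivalent approach using scikit-learn; the file names and labels are hypothetical stand-ins, not the study's data.

```python
# A 70/20/10 train/validation/test partition, sketched with scikit-learn.
# `image_paths` and `labels` are hypothetical placeholders for the 312 de-identified radiographs.
from sklearn.model_selection import train_test_split

image_paths = [f"radiograph_{i:03d}.png" for i in range(312)]  # hypothetical file names
labels = [i % 2 for i in range(312)]                           # placeholder lesion/no-lesion flags

# First carve out the 10% test set, then split the remainder 70/20 (2/9 of 90% is 20%).
train_val, test, y_train_val, y_test = train_test_split(
    image_paths, labels, test_size=0.10, stratify=labels, random_state=42)
train, val, y_train, y_val = train_test_split(
    train_val, y_train_val, test_size=2/9, stratify=y_train_val, random_state=42)

print(len(train), len(val), len(test))  # approximately 70% / 20% / 10% of 312
```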
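The five tooth-level metrics reported in the abstract follow standard confusion-matrix definitions. The sketch below computes them from raw counts; the counts shown are invented to roughly mirror the reported percentages and are not the study's actual tallies, which the abstract does not publish.

```python
# Tooth-level diagnostic metrics from a 2x2 confusion matrix.
# These counts are hypothetical, chosen only to approximate the reported percentages.
tp, fp, tn, fn = 141, 43, 520, 18

sensitivity = tp / (tp + fn)                   # lesions correctly flagged (recall)
specificity = tn / (tn + fp)                   # lesion-free teeth correctly cleared
precision   = tp / (tp + fp)                   # positive predictive value
accuracy    = (tp + tn) / (tp + fp + tn + fn)  # overall agreement with the reference standard
f1          = 2 * precision * sensitivity / (precision + sensitivity)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} "
      f"precision={precision:.3f} accuracy={accuracy:.3f} f1={f1:.3f}")
```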
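The regional comparisons (e.g., maxilla vs. mandible sensitivity) rest on chi-square tests of detection counts. A minimal sketch of one such pairwise test is given below, assuming hypothetical per-jaw hit/miss tallies chosen only to reflect the reported sensitivity gap; the resulting p-value is illustrative, not the study's.

```python
# Pairwise chi-square test comparing sensitivity between maxilla and mandible.
# The 2x2 counts are hypothetical; the abstract does not report the underlying tallies.
from scipy.stats import chi2_contingency

#               detected  missed   (teeth with a CBCT-confirmed lesion)
maxilla_row  = [81, 6]    # ~93% sensitivity, illustrative
mandible_row = [62, 12]   # ~84% sensitivity, illustrative

chi2, p, dof, expected = chi2_contingency([maxilla_row, mandible_row])
print(f"chi2={chi2:.3f}, p={p:.4f}, dof={dof}")  # significant at alpha = 0.05 if p < 0.05
```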