Multidimensional Evaluation of AI-Generated Patient Information on Cone-Beam Computed Tomography.
Authors
Affiliations (4)
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli Health and Technology University, Kocaeli, 41275, Turkey.
- Department of Pedodontics, Faculty of Dentistry, Kocaeli Health and Technology University, Kocaeli, 41275, Turkey. [email protected].
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Kocaeli Health and Technology University, Yeniköy Merkez, Ilıca Cd. No:29, Başiskele/Kocaeli, 41275, Turkey. [email protected].
- Faculty of Dentistry, Kocaeli Health and Technology University, Kocaeli, 41275, Turkey.
Abstract
This study aimed to conduct a multidimensional evaluation of artificial intelligence (AI) chatbot-generated patient information regarding cone-beam computed tomography (CBCT) in dentistry, with a specific focus on readability, informational quality, reliability, and patient-centered suitability. Twenty frequently asked, patient-oriented questions related to CBCT were systematically identified from a public online forum. Each question was submitted to four large language model-based chatbots (ChatGPT-4o, Gemini Advanced, Claude Sonnet 4, and Microsoft Copilot) under standardized conditions. Generated responses were evaluated using validated instruments, including the DISCERN tool and the Global Quality Scale (GQS) for information quality and reliability, as well as the Flesch Reading Ease, Flesch-Kincaid Grade Level, and Gunning Fog Index for readability. Patient-centeredness was further assessed using PEMAT-Understandability and PEMAT-Actionability scores. Comparative analyses were performed using linear mixed-effects models. Significant differences were observed among chatbots across all evaluated domains (p < 0.05). While advanced models demonstrated higher informational quality and reliability, their responses frequently exceeded recommended health literacy thresholds. Readability, transparency, and actionability varied substantially between platforms. No chatbot consistently met all criteria for optimal patient-directed communication. AI chatbots can provide generally accurate information on CBCT; however, variability in readability, reliability, and educational suitability limits their standalone use for patient education. Careful integration with professional oversight is essential to ensure safe and accessible AI-supported communication in dentomaxillofacial radiology. This study provides the first multidimensional, comparative evaluation of leading AI chatbots in delivering patient-oriented information about cone-beam computed tomography.
It shows critical gaps between informational accuracy and health literacy suitability, underscoring the need for professional oversight when AI is used for patient education in dentomaxillofacial radiology.