Evaluating the role of LLMs in supporting patient education during the informed consent process for routine radiology procedures.
Authors
Affiliations (6)
- Otto von Guericke University Magdeburg Medical Faculty, Clinic for Neuroradiology, Magdeburg, DE.
- University Hospital, Otto-von-Guericke-University Magdeburg, Department of Radiation Protection, Magdeburg, DE.
- Otto von Guericke University Magdeburg Medical Faculty, Clinic for Neuroradiology, Magdeburg, DE; Charité Universitätsmedizin Berlin, Department of Nuclear Medicine, Berlin, DE.
- University Hospital Jena, Institute of Diagnostic and Interventional Radiology, Section Neuroradiology, Jena, DE.
- Johannes-Wesling-Hospital, Ruhr-University-Bochum, Department of Radiology, Neuroradiology, and Nuclear Medicine, Bochum, DE.
- Otto von Guericke University Magdeburg Medical Faculty, Clinic for Neuroradiology, Magdeburg, DE; STIMULATE Research Campus, Magdeburg, DE.
Abstract
This study evaluated three LLM chatbots (GPT-3.5-turbo, GPT-4-turbo, and GPT-4o) on their effectiveness in supporting patient education by answering common patient questions during informed consent for CT, MRI, and DSA, assessing their accuracy and clarity. Two radiologists formulated 90 questions categorized as general, clinical, or technical, and each LLM answered every question five times. Radiologists then rated the responses for medical accuracy and clarity, while medical physicists assessed technical accuracy on a Likert scale. Semantic similarity across the repeated answers was analyzed with SBERT and cosine similarity. Ratings improved with newer model versions: linear mixed-effects models showed that the GPT-4 models were rated significantly higher than GPT-3.5 (p < 0.001) by both physicians and physicists. However, physicians' ratings of the GPT-4 models decreased significantly for the more complex modalities DSA and MRI (p < 0.01), a pattern not observed in the physicists' ratings. SBERT analysis revealed high internal consistency across all models. The variability in ratings indicated that while the models handled general and technical questions effectively, they struggled with contextually complex medical inquiries requiring personalized responses and nuanced understanding. Overall, the newer models were superior, but their performance was modality-dependent and was perceived differently by clinical and technical experts. These findings highlight the potential of LLMs to enhance informed consent in radiology, with clear strengths in general and technical questions and persistent limitations in complex clinical inquiries.
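The abstract does not specify which SBERT checkpoint or similarity procedure was used. The following is a minimal sketch of how the internal-consistency analysis could look, assuming the sentence-transformers library, the all-MiniLM-L6-v2 checkpoint, and mean pairwise cosine similarity over the five repeated answers per question; none of these choices are taken from the paper.

```python
# Minimal sketch: internal consistency of repeated LLM answers via SBERT
# embeddings and pairwise cosine similarity. The checkpoint name and the
# averaging scheme are assumptions, not taken from the paper.
from itertools import combinations

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint

# Five repeated answers from one model to the same consent question
# (two shown here for brevity).
answers = [
    "An MRI scan uses a strong magnetic field and radio waves ...",
    "MRI imaging relies on magnetic fields rather than ionizing radiation ...",
]

embeddings = model.encode(answers, convert_to_tensor=True)

# Mean pairwise cosine similarity: values near 1.0 indicate the model
# phrases its answer consistently across repetitions.
pairs = list(combinations(range(len(answers)), 2))
scores = [float(util.cos_sim(embeddings[i], embeddings[j])) for i, j in pairs]
print(f"mean pairwise cosine similarity: {sum(scores) / len(scores):.3f}")
```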
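Likewise, the exact mixed-model specification is not given in the abstract. One plausible formulation with statsmodels, treating expert ratings as the outcome, LLM version and imaging modality as interacting fixed effects, and question as a random intercept, might look like the sketch below; the column names and file name are hypothetical.

```python
# Sketch of a linear mixed-effects model for the Likert ratings: fixed
# effects for LLM version and imaging modality plus their interaction,
# with a random intercept per question. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

ratings = pd.read_csv("ratings.csv")  # columns: rating, llm, modality, question_id

mixed = smf.mixedlm(
    "rating ~ C(llm) * C(modality)",  # fixed effects and interaction
    data=ratings,
    groups=ratings["question_id"],    # random intercept per question
)
result = mixed.fit()
print(result.summary())  # coefficients, p-values, variance components
```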