Evaluating Generative AI as an Educational Tool for Radiology Resident Report Drafting.
Authors
Affiliations (3)
- New York University Grossman School of Medicine, Department of Radiology.
- New York University Grossman School of Medicine.
- New York University Grossman School of Medicine, Department of Radiology; New York University, Center for Data Science. Electronic address: [email protected].
Abstract
Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills, but increasing clinical workload often limits attendings' ability to provide such guidance. This study evaluates a HIPAA-compliant GPT-4o system that delivers automated feedback on breast imaging reports drafted by residents in routine clinical practice. We analyzed 5,000 resident-attending report pairs from a multi-site U.S. health system. GPT-4o was prompted with clinical instructions to identify common errors and generate feedback. A reader study was conducted on 100 report pairs: four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT-4o and readers was assessed using percent match; inter-reader reliability was measured with Krippendorff's alpha; educational value was measured as the proportion of cases rated helpful. Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) a final assessment inconsistent with the findings. GPT-4o showed strong agreement with attending consensus across the three error types: 90.5%, 78.3%, and 90.4% (Cohen's κ: 0.790, 0.550, and 0.615). Inter-reader reliability among all eight readers was moderate to substantial (α = 0.767, 0.595, and 0.567). When each reader was individually replaced with GPT-4o and agreement among the remaining seven readers plus GPT-4o was recalculated, the change in reliability was not statistically significant (Δα = -0.004 to 0.002, all p > 0.05). GPT-4o's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0% across the three error types. GPT-4o can reliably identify key educational errors and may serve as a scalable tool to support radiology education.
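
The abstract does not include the authors' analysis code, but the reported metrics (percent match, Cohen's kappa against attending consensus, Krippendorff's alpha over eight readers, and the leave-one-reader-out replacement deltas) are standard to compute. The sketch below is illustrative only, not the study's implementation: the label arrays are randomly generated stand-ins for the per-case binary error judgments, the majority-vote consensus rule (ties counted as error present) is an assumption, and it assumes the `krippendorff` PyPI package and scikit-learn are installed.

```python
# Minimal sketch of the agreement analyses described in the abstract.
# All labels here are hypothetical; shapes mirror the reader study
# (8 readers, 100 report pairs, binary "error present" labels).
import numpy as np
import krippendorff
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
n_cases = 100

# Rows = readers (first 4 = attendings, last 4 = residents), cols = cases.
readers = rng.integers(0, 2, size=(8, n_cases))
gpt = rng.integers(0, 2, size=n_cases)

# Attending consensus via simple majority (ties -> present; an assumption).
attending_consensus = (readers[:4].mean(axis=0) >= 0.5).astype(int)

# GPT vs. attending consensus: percent match and Cohen's kappa.
percent_match = np.mean(gpt == attending_consensus)
kappa = cohen_kappa_score(attending_consensus, gpt)

# Inter-reader reliability among all eight readers (nominal labels).
alpha_all = krippendorff.alpha(reliability_data=readers,
                               level_of_measurement="nominal")

# Leave-one-reader-out replacement: swap each reader for GPT and
# recompute alpha; the abstract reports the per-reader change (delta).
for i in range(readers.shape[0]):
    swapped = readers.copy()
    swapped[i] = gpt
    alpha_swapped = krippendorff.alpha(reliability_data=swapped,
                                       level_of_measurement="nominal")
    print(f"reader {i}: delta alpha = {alpha_swapped - alpha_all:+.4f}")

print(f"percent match = {percent_match:.1%}, kappa = {kappa:.3f}, "
      f"alpha (8 readers) = {alpha_all:.3f}")
```

This procedure would be repeated once per error type to reproduce the three sets of figures quoted above; the abstract does not specify which statistical test produced the reported p-values for the alpha deltas, so that step is omitted here.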