Evaluating Generative AI as an Educational Tool for Radiology Resident Report Drafting
Authors
Affiliations (1)
- NYU Grossman School of Medicine
Abstract
Objective: Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills. Increasing clinical workload often limits attendings' ability to provide guidance. This study evaluates a HIPAA-compliant GPT-4o system that delivers automated feedback on breast imaging reports drafted by residents in real clinical settings.

Methods: We analyzed 5,000 resident-attending report pairs from routine practice at a multi-site U.S. health system. GPT-4o was prompted with clinical instructions to identify common errors and provide feedback. A reader study using 100 report pairs was conducted. Four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT-4o and readers was assessed using percent match. Inter-reader reliability was measured with Krippendorff's alpha. Educational value was measured as the proportion of cases rated helpful.

Results: Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) final assessment inconsistent with findings. GPT-4o showed strong agreement with attending consensus: 90.5%, 78.3%, and 90.4% across the three error types. Inter-reader reliability showed moderate variability (α = 0.767, 0.595, 0.567), and replacing a human reader with GPT-4o did not significantly affect agreement (Δα = -0.004 to 0.002). GPT-4o's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0%.

Discussion: GPT-4o can reliably identify key educational errors. It may serve as a scalable tool to support radiology education.
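As an illustration of the agreement metrics described above, the following sketch computes percent match against attending consensus, Krippendorff's alpha, and the change in alpha when one human reader is swapped for GPT-4o. It is a minimal sketch, not the study's analysis code: the ratings are randomly generated placeholders, the majority-vote consensus rule is an assumption, and it relies on the open-source `krippendorff` Python package.

```python
# Sketch: agreement metrics for one error type, using hypothetical
# binary ratings (1 = error present, 0 = absent) per report pair.
import numpy as np
import krippendorff  # open-source package: pip install krippendorff

rng = np.random.default_rng(0)
n_cases = 100  # reader-study size

# Hypothetical ratings: rows = raters, columns = report pairs.
attendings = rng.integers(0, 2, size=(4, n_cases))  # 4 attending readers
gpt = rng.integers(0, 2, size=n_cases)               # GPT-4o labels

# Attending consensus by majority vote (assumed rule; ties counted
# as "error present").
consensus = (attendings.sum(axis=0) >= 2).astype(int)

# Percent match between GPT-4o and attending consensus.
percent_match = float(np.mean(gpt == consensus))

# Krippendorff's alpha across the four attendings (nominal data).
alpha_humans = krippendorff.alpha(
    reliability_data=attendings, level_of_measurement="nominal"
)

# Change in alpha when one attending is replaced by GPT-4o.
swapped = np.vstack([attendings[:3], gpt])
delta_alpha = krippendorff.alpha(
    reliability_data=swapped, level_of_measurement="nominal"
) - alpha_humans

print(f"percent match: {percent_match:.3f}")
print(f"alpha (attendings): {alpha_humans:.3f}, delta after swap: {delta_alpha:+.4f}")
```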