
Evaluating Generative AI as an Educational Tool for Radiology Resident Report Drafting.

December 24, 2025

Authors

Verdone A, Cardall A, Siddiqui F, Nashawaty M, Rigau D, Kwon Y, Yousef M, Patel S, Kieturakis A, Kim E, Heacock L, Reig B, Shen Y

Affiliations (3)

  • New York University Grossman School of Medicine, Department of Radiology.
  • New York University Grossman School of Medicine.
  • New York University Grossman School of Medicine, Department of Radiology; New York University, Center for Data Science. Electronic address: [email protected].

Abstract

Radiology residents require timely, personalized feedback to develop accurate image analysis and reporting skills, but increasing clinical workload often limits attendings' ability to provide such guidance. This study evaluates a HIPAA-compliant GPT-4o system that delivers automated feedback on breast imaging reports drafted by residents in real clinical settings. We analyzed 5,000 resident-attending report pairs from routine practice at a multi-site U.S. health system. GPT-4o was prompted with clinical instructions to identify common errors and provide feedback. A reader study was conducted using 100 report pairs: four attending radiologists and four residents independently reviewed each pair, determined whether predefined error types were present, and rated GPT-4o's feedback as helpful or not. Agreement between GPT-4o and readers was assessed with percent match and Cohen's kappa, inter-reader reliability was measured with Krippendorff's alpha, and educational value was measured as the proportion of cases rated helpful. Three common error types were identified: (1) omission or addition of key findings, (2) incorrect use or omission of technical descriptors, and (3) a final assessment inconsistent with the findings. GPT-4o showed strong agreement with the attending consensus: 90.5%, 78.3%, and 90.4% (Cohen's κ = 0.790, 0.550, and 0.615) across the three error types. Inter-reader reliability among all eight readers was moderate to substantial (α = 0.767, 0.595, and 0.567). When each reader was individually replaced with GPT-4o and agreement among the remaining seven readers plus GPT-4o was recalculated, the change was not statistically significant (Δ = -0.004 to 0.002, all p > 0.05). GPT-4o's feedback was rated helpful in most cases: 89.8%, 83.0%, and 92.0% across the three error types. GPT-4o can reliably identify key educational errors in resident breast imaging reports and may serve as a scalable tool to support radiology education.
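
For context on the agreement statistics reported above, the following is a minimal Python sketch (not the authors' analysis code) of how percent match, Cohen's kappa, and Krippendorff's alpha could be computed from binary error labels. The example labels, the variable names, and the use of scikit-learn and the krippendorff package are illustrative assumptions.

    # Illustrative sketch: agreement metrics on hypothetical binary labels
    # (1 = error present, 0 = error absent). Requires numpy, scikit-learn,
    # and the `krippendorff` package.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score
    import krippendorff

    # Hypothetical labels for 10 report pairs on one error type.
    gpt_labels = np.array([1, 0, 0, 1, 0, 1, 0, 0, 1, 0])
    attending_consensus = np.array([1, 0, 0, 1, 0, 0, 0, 0, 1, 0])

    # Percent match: fraction of cases where GPT-4o agrees with the consensus.
    percent_match = np.mean(gpt_labels == attending_consensus)

    # Cohen's kappa: chance-corrected agreement between the two label sets.
    kappa = cohen_kappa_score(gpt_labels, attending_consensus)

    # Krippendorff's alpha across eight readers (rows = readers, columns = cases);
    # random labels are used here purely as placeholder data.
    reader_labels = np.random.default_rng(0).integers(0, 2, size=(8, 10))
    alpha = krippendorff.alpha(reliability_data=reader_labels,
                               level_of_measurement="nominal")

    print(f"percent match: {percent_match:.3f}, kappa: {kappa:.3f}, alpha: {alpha:.3f}")

In the study's design, the same computation would be repeated per error type, and the reader-replacement analysis would substitute GPT-4o's labels for one reader's row before recomputing alpha.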

Topics

Journal Article
