ChatGPT-Generated Questions Match Quality of Radiologist-Written MCQs for Residents

AuntMinnie Industry

A study found that ChatGPT can generate multiple-choice questions (MCQs) for radiology residents of comparable quality to those written by attending radiologists.

Key Details

  • Study published July 7 in Academic Radiology assessed MCQs for resident education.
  • 144 MCQs were generated by ChatGPT from lecture transcripts; 17 were used in the study.
  • The ChatGPT questions were mixed with 11 radiologist-written MCQs; 21 residents participated.
  • No significant difference in perceived quality: mean score of 6.93 for ChatGPT MCQs vs. 7.08 for attending-written MCQs.
  • Correct-answer rates were similar: 57% for ChatGPT questions vs. 59% for attendings' questions.
  • Residents were less likely to identify ChatGPT-generated questions as having been written by attendings.

Why It Matters

These findings suggest that large language models like ChatGPT can support radiology education by efficiently generating high-quality, non-image-based assessment material. Such tools could help fill gaps in faculty participation and streamline resident assessment in radiology programs.
