ChatGPT-Generated Questions Match Quality of Radiologist-Written MCQs for Residents
July 8, 2025
A study found that ChatGPT can generate multiple-choice questions (MCQs) for radiology residents that are comparable in quality to those written by attending radiologists.
Key Details
- Study published July 7 in Academic Radiology assessed MCQs for resident education.
- 144 MCQs were generated by ChatGPT from lecture transcripts; 17 were used in the study.
- The ChatGPT questions were mixed with 11 radiologist-written MCQs; 21 residents took the combined set.
- No significant difference in perceived quality: ChatGPT MCQ mean score 6.93 vs. 7.08 for attendings.
- Residents answered correctly at similar rates: 57% on ChatGPT questions vs. 59% on attending-written questions.
- Residents were less likely to attribute ChatGPT-generated questions to attending radiologists.
Why It Matters
These findings suggest that large language models like ChatGPT can support radiology education by efficiently generating high-quality, non-image-based assessment material. Such tools could help close faculty participation gaps and streamline resident assessment in radiology programs.
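The study does not publish the exact prompt or model settings used to turn lecture transcripts into questions. As a purely illustrative sketch, not the authors' method, transcript-based MCQ generation might look like the following with the OpenAI Python SDK; the model name, prompt wording, and file name are all assumptions:

```python
# Illustrative sketch only: the study's actual prompt and settings are not published.
# The model name, prompt text, and output format below are assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_mcqs(transcript: str, n_questions: int = 5) -> str:
    """Ask the model for board-style MCQs based on a lecture transcript."""
    prompt = (
        f"From the radiology lecture transcript below, write {n_questions} "
        "board-style multiple-choice questions. Each question should have one "
        "correct answer and three plausible distractors, and should not require "
        "an image to answer. Label the correct answer.\n\n"
        f"Transcript:\n{transcript}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the study reportedly used ChatGPT itself
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Example usage (file name is a placeholder):
# print(generate_mcqs(open("lecture_transcript.txt").read(), n_questions=3))
```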