ChatGPT-Generated Questions Match Quality of Radiologist-Written MCQs for Residents
Tags: Research
A study found that ChatGPT can generate multiple-choice questions (MCQs) for radiology residents of comparable quality to those written by attending radiologists.
Key Details
- Study published July 7 in Academic Radiology assessed MCQs for resident education.
- 144 MCQs were generated by ChatGPT from lecture transcripts; 17 were used in the study.
- The ChatGPT questions were mixed with 11 radiologist-written MCQs; 21 residents participated.
- No significant difference in perceived quality: mean score of 6.93 for ChatGPT MCQs vs. 7.08 for attending-written MCQs.
- Correct answer rates were similar: ChatGPT (57%), attendings (59%).
- Residents were less likely to believe that ChatGPT-generated questions had been written by attendings.
Why It Matters
These findings suggest that large language models like ChatGPT can support radiology education by efficiently generating high-quality, non-image-based assessment material. Such tools could address faculty participation gaps and streamline resident assessment in radiology programs.

Source
AuntMinnie
Related News

• Radiology Business
Study Highlights Limitations of AI in Prostate MRI Screening
New research points to several shortcomings in implementing AI for MRI-based prostate cancer screening.

• AuntMinnie
Deep Learning Model Predicts Brain Tumor MRI Enhancement Without Gadolinium
German researchers developed a deep learning approach to predict MRI contrast enhancement in brain tumors without the need for gadolinium-based agents.

• AuntMinnie
Multimodal LLMs Achieve High Accuracy Detecting Scoliosis on X-rays
Multimodal LLMs achieved up to 94% accuracy for scoliosis detection on spine x-rays but struggled with lumbar stenosis on MRI.