ChatGPT-Generated Questions Match Quality of Radiologist-Written MCQs for Residents
A study found that ChatGPT can generate multiple-choice questions for radiology residents of comparable quality to those written by attending radiologists.
Key Details
- Study published July 7 in Academic Radiology assessed MCQs for resident education.
- 144 MCQs were generated by ChatGPT from lecture transcripts; 17 were used in the study.
- Questions were mixed with 11 radiologist-written MCQs; 21 residents participated.
- No significant difference in perceived quality: mean score of 6.93 for ChatGPT MCQs vs. 7.08 for attending-written MCQs.
- Correct answer rates were similar: ChatGPT (57%) vs. attendings (59%).
- Residents were less likely to identify ChatGPT-generated questions as having been written by attendings.
Source
AuntMinnie
Related News

Deep Learning AI Outperforms Radiologists in Detecting ENE on CT
A deep learning tool, DeepENE, exceeded radiologist performance in identifying lymph node extranodal extension in head and neck cancers using preoperative CT scans.

Patients Favor AI in Imaging Diagnostics, Hesitate on Triage Use
Survey finds most patients support AI in diagnostic imaging but are reluctant about its use in triage decisions.

AI Projected to Reshape Radiologist Workload But Not Eliminate Jobs
Stanford researchers predict AI could reduce radiologist hours by up to 49% over the next five years, though workforce size is likely to remain stable due to rising imaging demand.