ChatGPT-Generated Questions Match Quality of Radiologist-Written MCQs for Residents
A study found that ChatGPT can generate multiple-choice questions (MCQs) for radiology residents that are comparable in quality to those written by attending radiologists.
Key Details
- Study published July 7 in Academic Radiology assessed MCQs for resident education.
- 144 MCQs were generated by ChatGPT from lecture transcripts; 17 were used in the study.
- Questions were mixed with 11 radiologist-written MCQs; 21 residents participated.
- No significant difference in perceived quality: ChatGPT MCQs scored a mean of 6.93 vs. 7.08 for attending-written questions.
- Correct answer rates were similar: 57% for ChatGPT questions vs. 59% for attending-written questions.
- Residents were less likely to identify ChatGPT-generated questions as having been written by attendings.
Source
AuntMinnie
Related News

Women's Uncertainty About AI in Breast Imaging May Limit Acceptance
Many women remain unclear about the role of AI in breast imaging, creating hesitation toward its adoption.

Stanford Team Introduces Real-Time AI Safety Monitoring for Radiology
Stanford researchers introduced an ensemble monitoring model to provide real-time confidence assessments for FDA-cleared radiology AI tools.

Head-to-Head Study Evaluates AI Accuracy in Fracture Detection on X-Ray
A prospective study compared three commercial AI tools for fracture detection on x-ray, showing moderate-to-high accuracy for simple cases but weaker performance in complex scenarios.