ChatGPT-Generated Questions Match Quality of Radiologist-Written MCQs for Residents
A study found that ChatGPT can generate multiple-choice questions for radiology residents of comparable quality to those written by attending radiologists.
Key Details
- Study published July 7 in Academic Radiology assessed MCQs for resident education.
- 144 MCQs were generated by ChatGPT from lecture transcripts; 17 were used in the study.
- Questions were mixed with 11 radiologist-written MCQs; 21 residents participated.
- No significant difference in perceived quality: mean score of 6.93 for ChatGPT MCQs vs. 7.08 for attending-written MCQs.
- Correct answer rates were similar: 57% for ChatGPT questions vs. 59% for attending-written questions.
- Residents were less likely to identify ChatGPT-generated questions as having been written by attendings.
Source
AuntMinnie