ChatGPT-Generated Questions Match Quality of Radiologist-Written MCQs for Residents
A study found that ChatGPT can generate multiple-choice questions for radiology residents of comparable quality to those written by attending radiologists.
Key Details
- Study published July 7 in Academic Radiology assessed MCQs for resident education.
- 144 MCQs were generated by ChatGPT from lecture transcripts; 17 were used in the study.
- ChatGPT questions were mixed with 11 radiologist-written MCQs; 21 residents participated.
- No significant difference in perceived quality: mean score of 6.93 for ChatGPT MCQs vs. 7.08 for attending-written MCQs.
- Correct answer rates were similar: ChatGPT (57%) vs. attendings (59%).
- Residents were less likely to identify ChatGPT-generated questions as having been written by attendings.
Why It Matters

Source
AuntMinnie