ChatGPT-Generated Questions Match Quality of Radiologist-Written MCQs for Residents
A study found that ChatGPT can generate multiple-choice questions (MCQs) for radiology residents that are comparable in quality to those written by attending radiologists.
Key Details
- Study published July 7 in Academic Radiology assessed MCQs for resident education.
- 144 MCQs were generated by ChatGPT from lecture transcripts, of which 17 were used in the study.
- The ChatGPT questions were mixed with 11 radiologist-written MCQs; 21 residents participated.
- No significant difference in perceived quality: mean score of 6.93 for ChatGPT MCQs vs. 7.08 for attending-written MCQs.
- Correct answer rates were similar: 57% for ChatGPT questions vs. 59% for attending-written questions.
- Residents were less likely to identify ChatGPT-generated questions as having been written by attendings.
Why It Matters

If ChatGPT can produce MCQs that residents rate as comparable in quality to attending-written questions, it could help faculty generate practice material for radiology resident education.
Source
AuntMinnie
Related News

Toronto Study: LLMs Must Cite Sources for Radiology Decision Support
University of Toronto researchers found that large language models (LLMs) such as DeepSeek V3 and GPT-4o offer promising support for radiology decision-making in pancreatic cancer when their recommendations cite guideline sources.

AI Model Using Mammograms Enhances Five-Year Breast Cancer Risk Assessment
A new image-only AI model more accurately predicts five-year breast cancer risk than breast density alone, according to multinational research presented at RSNA 2025.

AI Model Uses CT Scans to Reveal Biomarker for Chronic Stress
Researchers developed an AI model to measure chronic stress using adrenal gland volume on routine CT scans.