
Researchers evaluated ChatGPT, Gemini, and Copilot for explaining radiology reports to patients.
Key Details
- Study published in 'Insights Into Imaging' compared ChatGPT, Gemini, and Copilot.
- 100 anonymized radiology reports were input into each LLM using a standard prompt.
- Outputs were rated on a 3-point scale for understandability, readability, and medical accuracy.
- Researchers also assessed each model's use of uncertainty language and quality of patient guidance.
- Aim was to determine which LLM most accurately and clearly conveyed imaging findings.
Why It Matters
As patients increasingly access their own imaging reports, reliable and accurate AI explanations can improve understanding while minimizing anxiety and confusion. Ensuring that LLMs perform well in this setting could help bridge the communication gap between radiologists and patients.

Source
Radiology Business
Related News

• Radiology Business
AI Guidance Cuts Novice Ultrasound Exam Time by 34%
AI guidance significantly reduces exam times and enhances diagnostic quality for novice ultrasound operators performing shoulder exams.

• Radiology Business
UCLA Appoints Inaugural Associate Dean for Health AI Strategy
UCLA has appointed Katherine P. Andriole as its first associate dean for Health AI Strategy and Innovation, with an initial focus on radiology.

• AuntMinnie
AI Models Reveal Racial Disparities in Breast Cancer Patterns
Machine learning models reveal significant racial disparities and key predictors in breast cancer incidence across diverse groups.