
Researchers evaluated ChatGPT, Gemini, and Copilot for explaining radiology reports to patients.
Key Details
- Study published in 'Insights Into Imaging' compared ChatGPT, Gemini, and Copilot.
- 100 anonymized radiology reports were input into each LLM using a standard prompt.
- Outputs were rated on a 3-point scale for understandability, readability, and medical accuracy.
- Researchers also assessed each model's use of uncertainty language and the quality of its patient guidance.
- The aim was to determine which LLM most accurately and clearly conveyed imaging findings.
Why It Matters
As patients increasingly access their own imaging reports, reliable and accurate AI-generated explanations can improve understanding while minimizing anxiety and confusion. Ensuring that LLMs perform well in this setting could help bridge communication gaps between radiologists and patients.

Source
Radiology Business
Related News

• AuntMinnie
AI Model Accurately Detects Pediatric Physeal Fractures on X-Ray
A deep learning model accurately identifies hard-to-detect physeal fractures in children's wrist x-rays.

• AuntMinnie
AI Advancements and Studies Highlighted in Digital X-Ray Insider
This edition covers AI models for fracture detection, mortality prediction, and more, along with new research using x-ray and DEXA modalities.

• AuntMinnie
Adult-Trained Radiology AI Models Struggle in Pediatric Imaging
Adult-trained radiology AI models often underperform when applied to pediatric imaging data, according to a systematic review.