
Latest multimodal large language models show limitations on image-based radiology exam questions.
Key Details
- Researchers tested ChatGPT-4v and ChatGPT-4o on 222 image-based multiple-choice questions from national radiology board exams (2020 and 2024).
- These models were recently trained to process both text and images.
- Despite these advancements, significant concerns remain about their reliability for diagnostic tasks in radiology.
- The potential of such models in radiology workflows, such as report generation and diagnostic support, is still in the early stages of investigation.
Why It Matters
As large language models gain capability for image analysis, assessing their reliability is crucial for safe deployment in radiology. Failures on board-style questions highlight the need for ongoing scrutiny before clinical trust is warranted.

Source
Radiology Business
Related News

- AuntMinnie
Radiology Receives Declining Share of Industry Research Funding
Radiologists received only 1.1% of industry-funded research payments in 2024, with a continuing downward trend.

- AuntMinnie
GPT-4o AI Matches Radiologists in Follow-Up Imaging Recommendations
GPT-4o matched the performance of experienced radiologists and surpassed residents in recommending follow-up imaging from routine radiology reports.

- Cardiovascular Business
AI Leverages Head CTs for Automated Heart Risk Assessments
AI models can turn routine head CT scans into automated cardiovascular risk assessments, expanding the utility of radiology studies.