A large language model-based AI agent outperformed manual systems in identifying follow-up imaging recommendations from radiologist notes.
Key Details
- The AI, based on Meta's Llama-3 70B, flagged 6.18 times more follow-up imaging cases than a manual macro system (513 vs. 83 in 10,000 reports).
- It achieved 98.7% accuracy and a balanced accuracy above 97% in test evaluations.
- During three months in silent production, the AI flagged 9,600 studies for follow-up versus 1,145 by the macro system across 120,000 studies.
- The system extracted details such as follow-up timing and clinical rationale with 94% accuracy.
- The AI operated in real time without disrupting clinical workflows, using prompt engineering rather than fine-tuning.
- The approach is scalable, but further research is needed to determine its impact on patient care outcomes.
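A prompt-engineering approach like the one described typically wraps each report in an instruction template and parses structured output from the model, with no weight updates. The sketch below illustrates this pattern only; the template wording, the `call_llm` placeholder, and the JSON schema are assumptions for illustration, not the system's actual implementation.

```python
import json

# Hypothetical prompt template: asks the model to emit structured JSON
# indicating whether the report recommends follow-up imaging, and if so,
# the recommended timing and clinical rationale.
PROMPT_TEMPLATE = """You are a radiology assistant. Read the report below.
If it recommends follow-up imaging, answer with JSON:
{{"follow_up": true, "timing": "<when>", "rationale": "<why>"}}
Otherwise answer {{"follow_up": false}}.

Report:
{report}
"""

def call_llm(prompt: str) -> str:
    # Placeholder for the real inference call (e.g., a locally hosted
    # Llama-3 70B endpoint). Returns a canned response here so the
    # sketch runs end to end without a model.
    return ('{"follow_up": true, "timing": "6 months", '
            '"rationale": "indeterminate pulmonary nodule"}')

def flag_follow_up(report: str) -> dict:
    """Build the prompt for one report and parse the model's JSON reply."""
    prompt = PROMPT_TEMPLATE.format(report=report)
    return json.loads(call_llm(prompt))

result = flag_follow_up("8 mm pulmonary nodule; recommend CT chest in 6 months.")
print(result["follow_up"], result["timing"])
```

Because only the prompt changes between use cases, this style of system can be updated without retraining, which is one reason it can run in real time alongside existing workflows.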
Why It Matters

Source
AuntMinnie
Related News

Stanford Launches Merlin: 3D AI Model for Abdominal CT Interpretation
Stanford researchers introduce Merlin, a 3D vision-language AI model for interpreting abdominal CT scans, demonstrating strong performance across multiple radiology tasks.

RadNet Acquires AI Firm Gleamer in $270M Deal to Expand Radiology Solutions
RadNet will acquire Gleamer for up to $270 million, aiming to make DeepHealth the largest global provider of radiology clinical AI solutions.

ChatGPT Use Soars in Radiology Research Abstracts Since 2023
Radiology research abstracts show a marked rise in LLM-assisted editing since the release of ChatGPT.