
A custom large language model significantly improved the identification of patients needing follow-up imaging by analyzing radiologists’ notes.
Key Details
- A new LLM-based tool was developed at Parkland Health to flag patients who require follow-up imaging.
- Traditional EHR macros and structured notes failed to efficiently capture needed follow-up recommendations.
- The AI model reads clinical impressions from radiologist notes to extract and standardize follow-up indications.
- More than 500,000 radiology studies are performed annually at the health system, emphasizing the scale and need for automation.
- Integration into the EHR enables real-time flagging and streamlined workflow for ensuring follow-up imaging.
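The workflow above can be sketched in miniature: prompt an LLM to read an impression, ask for a structured JSON answer, and normalize the reply before flagging the study. This is a hypothetical illustration, not Parkland's actual system; the prompt wording, JSON fields, and the `fake_llm` stand-in for a real model call are all assumptions.

```python
import json

# Hypothetical prompt asking the model for a structured JSON verdict.
# The field names (follow_up_needed, modality, interval) are illustrative.
PROMPT_TEMPLATE = (
    "Read the radiology impression below. Respond with JSON containing "
    '"follow_up_needed" (true/false), "modality", and "interval".\n\n'
    "Impression:\n{impression}"
)

def build_prompt(impression: str) -> str:
    """Insert the impression text into the extraction prompt."""
    return PROMPT_TEMPLATE.format(impression=impression)

def parse_response(raw: str) -> dict:
    """Standardize the model's reply; treat malformed output as 'no flag'."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return {"follow_up_needed": False, "modality": None, "interval": None}
    return {
        "follow_up_needed": bool(data.get("follow_up_needed", False)),
        "modality": data.get("modality"),
        "interval": data.get("interval"),
    }

def flag_study(impression: str, llm) -> dict:
    """Run one impression through the LLM callable and return a clean record."""
    return parse_response(llm(build_prompt(impression)))

# Stand-in for a real model call, so the sketch runs end to end.
def fake_llm(prompt: str) -> str:
    if "follow-up" in prompt.lower() or "recommend" in prompt.lower():
        return json.dumps({"follow_up_needed": True,
                           "modality": "CT chest", "interval": "6 months"})
    return json.dumps({"follow_up_needed": False})

result = flag_study(
    "8 mm pulmonary nodule. Recommend follow-up CT chest in 6 months.",
    fake_llm,
)
print(result["follow_up_needed"])  # True
```

In a production pipeline the flagged record would be written back to the EHR so the study surfaces in a follow-up worklist; the defensive JSON parsing matters because model output is not guaranteed to be well-formed.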
Source
Radiology Business
Related News

Deep Learning Model Predicts Brain Tumor MRI Enhancement Without Gadolinium
German researchers developed a deep learning approach to predict MRI contrast enhancement in brain tumors without the need for gadolinium-based agents.

Stanford Study: LLM-Generated Hospital Notes Safe, Aid Physician Wellbeing
Stanford research shows agentic LLMs can safely draft hospital discharge summaries, reducing physician burnout with minimal risk of patient harm.

Multimodal LLMs Achieve High Accuracy Detecting Scoliosis on X-rays
Multimodal LLMs achieved up to 94% accuracy for scoliosis detection on spine x-rays, but struggled with lumbar stenosis on MRI.