Stanford and Mayo Clinic Arizona researchers demonstrated that LLMs like GPT-4 can categorize critical findings in radiology reports using few-shot prompting.
Key Details
- GPT-4 and Mistral-7B LLMs tested for classifying critical findings in radiology reports from ICU patients.
- 252 MIMIC-III reports (mixed modalities: 56% CT, ~30% x-ray, 9% MRI) and 180 external chest x-ray reports evaluated.
- LLMs categorized findings as true, known/expected, or equivocal critical findings.
- GPT-4 achieved 90.1% precision and 86.9% recall for true critical findings in the internal test set; 82.6% precision and 98.3% recall in the external test set.
- Mistral-7B showed lower precision (75.6%) but comparable recall (77.4%-93.1%).
- Study highlights few-shot prompting as an efficient strategy; real-world deployment requires further refinement (a minimal prompting sketch follows this list).
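The sketch below illustrates the general idea of few-shot prompting for this kind of three-way classification. It is not the study's actual prompt or pipeline: the example reports, label wording, and model settings are hypothetical, and it assumes the OpenAI Python client with an API key in the environment.

```python
# Illustrative few-shot classification sketch (not the study's actual prompts):
# classify a flagged radiology finding as "true critical", "known/expected",
# or "equivocal" by showing the model a few labeled examples first.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical few-shot examples: (report excerpt, label) pairs.
FEW_SHOT_EXAMPLES = [
    ("New large right pneumothorax with mediastinal shift.", "true critical"),
    ("Stable appearance of known left lower lobe mass.", "known/expected"),
    ("Possible subtle subarachnoid hemorrhage; correlate clinically.", "equivocal"),
]

def classify_finding(report_text: str, model: str = "gpt-4") -> str:
    """Return one of: 'true critical', 'known/expected', 'equivocal'."""
    messages = [
        {
            "role": "system",
            "content": (
                "You classify critical findings in radiology reports. "
                "Answer with exactly one label: true critical, known/expected, or equivocal."
            ),
        }
    ]
    # Encode each few-shot example as a user/assistant turn pair.
    for excerpt, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": f"Report: {excerpt}"})
        messages.append({"role": "assistant", "content": label})
    # Append the report to be classified.
    messages.append({"role": "user", "content": f"Report: {report_text}"})

    response = client.chat.completions.create(
        model=model, messages=messages, temperature=0
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    print(classify_finding(
        "Acute massive pulmonary embolism involving the main pulmonary artery."
    ))
```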
Why It Matters
This research suggests that off-the-shelf LLMs could automate detection of urgent findings in the radiology workflow with minimal manual annotation, potentially easing communication bottlenecks and improving patient safety. Further development and EHR integration will be needed for clinical implementation.

Source
AuntMinnie
Related News

• AuntMinnie
AI Enables Safe 75% Gadolinium Reduction in Breast MRI Without Losing Sensitivity
AI-enhanced breast MRI with a 75% reduced gadolinium dose maintained diagnostic sensitivity comparable to full-dose protocols.

• Radiology Business
NVIDIA Envisions Autonomous AI Agents Transforming Radiology
NVIDIA foresees a major shift in radiology toward autonomous AI agents and imaging systems that could revolutionize patient care.

• Cardiovascular Business
Deep Learning AI Model Detects Coronary Microvascular Dysfunction Via ECG
A new AI algorithm rapidly detects coronary microvascular dysfunction using ECGs, with validation incorporating PET imaging.