Stanford and Mayo Clinic Arizona researchers demonstrated that LLMs like GPT-4 can categorize critical findings in radiology reports using few-shot prompting.
Key Details
- GPT-4 and Mistral-7B were tested for classifying critical findings in radiology reports from ICU patients.
- 252 MIMIC-III reports (mixed modalities: 56% CT, ~30% x-ray, 9% MRI) and 180 external chest x-ray reports were evaluated.
- The LLMs categorized findings as true, known/expected, or equivocal critical findings.
- GPT-4 achieved 90.1% precision and 86.9% recall for true critical findings on the internal test set, and 82.6% precision and 98.3% recall on the external test set.
- Mistral-7B showed lower precision (75.6%) but comparable recall (77.4%-93.1%).
- The study highlights few-shot prompting as an efficient strategy; real-world deployment requires further refinement (see the sketch after this list).
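The study's exact prompts and label definitions are not reproduced in the source, but the general few-shot approach can be sketched as follows: a handful of labeled example reports are supplied as prior conversation turns before the report to be classified. This is a minimal illustration using the OpenAI Python SDK; the model name, example reports, and label strings are assumptions for demonstration, not the researchers' actual materials.

```python
# Illustrative sketch of few-shot classification of critical findings in a
# radiology report. Prompt wording and example reports are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical few-shot examples, one per target category.
FEW_SHOT_EXAMPLES = [
    ("CT head: new large right MCA territory infarct with midline shift.",
     "true critical finding"),
    ("Chest x-ray: stable right pleural effusion, unchanged from prior.",
     "known/expected critical finding"),
    ("Chest x-ray: possible subtle left lower lobe opacity; correlation advised.",
     "equivocal critical finding"),
]

def classify_report(report_text: str) -> str:
    """Ask the model to assign one of the three critical-finding categories."""
    messages = [{
        "role": "system",
        "content": (
            "You classify findings in radiology reports as one of: "
            "'true critical finding', 'known/expected critical finding', "
            "or 'equivocal critical finding'. Answer with the label only."
        ),
    }]
    # Few-shot examples are passed as prior user/assistant turns.
    for example_report, label in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": example_report})
        messages.append({"role": "assistant", "content": label})
    messages.append({"role": "user", "content": report_text})

    response = client.chat.completions.create(
        model="gpt-4",
        messages=messages,
        temperature=0,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(classify_report(
        "CT abdomen: free intraperitoneal air concerning for perforated viscus."
    ))
```

The appeal of this strategy, as the study notes, is that it needs only a few labeled examples rather than a task-specific training set, though outputs still require validation before clinical use.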
Why It Matters

Source
AuntMinnie
Related News

Patients Favor AI in Imaging Diagnostics, Hesitate on Triage Use
Survey finds most patients support AI in diagnostic imaging but are reluctant about its use in triage decisions.

FDA Eases Path for AI in Clinical Decision Support and Healthcare Innovation
FDA publishes new guidance to promote innovation in general wellness and clinical decision support, impacting medical AI including radiology.

Deep Learning AI Outperforms Radiologists in Detecting ENE on CT
A deep learning tool, DeepENE, exceeded radiologist performance in identifying lymph node extranodal extension in head and neck cancers using preoperative CT scans.