A large language model (LLM) significantly outperforms RadLex in expanding terms for radiology report language standardization.
Key Details
- Study published in the American Journal of Roentgenology compared an LLM with RadLex for term expansion in radiology reports.
- The LLM (Gemini 2.0 Flash Thinking) generated 208,465 additional variants and 69,918 synonyms beyond RadLex's expansion.
- LLM expansion improved the lexical coverage rate to 81.9%, versus 67.5% for RadLex.
- Semantic recall improved to 81.6% (LLM) versus 64% (RadLex), with slightly lower precision (94.8% vs. 100%).
- The F1 score was higher for LLM expansion (0.91) than for RadLex (0.86).
- The study used chest CT reports from five international datasets.
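The precision, recall, and F1 figures above follow the standard definitions for scoring an expanded vocabulary against terms actually found in reports. A minimal sketch of how such metrics are typically computed (the toy term sets and function name are hypothetical illustrations, not the study's data or code):

```python
# Illustrative sketch only -- not the study's evaluation code.
# Scores an expanded term vocabulary against the reference terms
# observed in a corpus of reports, using set overlap.

def expansion_scores(expanded_terms: set[str], reference_terms: set[str]) -> dict[str, float]:
    """Compute precision, recall, and F1 for a term-expansion method."""
    matched = expanded_terms & reference_terms          # expansions that appear in reports
    precision = len(matched) / len(expanded_terms)      # fraction of expansions that are valid
    recall = len(matched) / len(reference_terms)        # fraction of report terms covered
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of the two
    return {"precision": precision, "recall": recall, "f1": f1}

# Hypothetical toy vocabularies for illustration
reference = {"ground-glass opacity", "ggo", "consolidation", "nodule"}
expansion = {"ground-glass opacity", "ggo", "consolidation", "mass"}

print(expansion_scores(expansion, reference))
```

Here three of the four expanded terms match the reference set, so precision, recall, and F1 all come out to 0.75. Note the article's reported F1 values are taken directly from the study, not derived from the quoted precision and recall alone.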
Why It Matters

Source
AuntMinnie
Related News

AI Models Reveal Racial Disparities in Breast Cancer Patterns
Machine learning models reveal significant racial disparities and key predictors in breast cancer incidence across diverse groups.

AI Algorithm Streamlines and Standardizes Shoulder Ultrasound Acquisition
A multitask AI system demonstrated high accuracy in standardizing and guiding shoulder musculoskeletal ultrasound imaging.

Deepfake X-rays Fool Radiologists and AI, Raising Security Concerns
Both radiologists and AI models struggle to differentiate between authentic and AI-generated ('deepfake') radiographic images, raising major security and clinical concerns.