A large language model (LLM) significantly outperforms RadLex in expanding terms for radiology report language standardization.
Key Details
- Study published in the American Journal of Roentgenology compared an LLM to RadLex for term expansion in radiology reports.
- The LLM (Gemini 2.0 Flash Thinking) generated 208,465 additional variants and 69,918 synonyms beyond RadLex's expansion.
- LLM expansion improved the lexical coverage rate to 81.9%, versus 67.5% for RadLex.
- Semantic recall improved to 81.6% (LLM) versus 64.0% (RadLex), with slightly lower precision (94.8% vs. 100.0%).
- The F1 score was higher for LLM expansion (0.91) than for RadLex (0.86).
- The study used chest CT reports from five international datasets.
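For readers unfamiliar with the metrics above, here is a minimal sketch of how precision, recall, F1, and lexical coverage are typically computed in this kind of term-matching evaluation. The counts below are illustrative placeholders, not data from the study, and the study's exact counting methodology may differ.

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Standard definitions: precision = TP/(TP+FP), recall = TP/(TP+FN),
    F1 = harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

def lexical_coverage(matched_terms: int, total_terms: int) -> float:
    """Fraction of report terms matched by the expanded vocabulary."""
    return matched_terms / total_terms

# Illustrative numbers only (not from the study):
p, r, f1 = precision_recall_f1(tp=80, fp=5, fn=20)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
print(f"coverage={lexical_coverage(819, 1000):.1%}")
```

A higher recall with slightly lower precision, as reported for the LLM expansion, means the expanded vocabulary matched more true term variants at the cost of a small number of spurious matches.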
Why It Matters
Automating terminology expansion with LLMs can enhance the accuracy and scalability of natural language processing in radiology, aiding standardized reporting, AI model development, and multi-center research.

Source
AuntMinnie
Related News

- AuntMinnie: Radiology Receives Declining Share of Industry Research Funding
  Radiologists received only 1.1% of industry-funded research payments in 2024, with a continuing downward trend.

- AuntMinnie: GPT-4o AI Matches Radiologists in Follow-Up Imaging Recommendations
  GPT-4o matched the performance of experienced radiologists and surpassed residents in recommending follow-up imaging from routine radiology reports.

- Cardiovascular Business: AI Leverages Head CTs for Automated Heart Risk Assessments
  AI models can turn routine head CT scans into automated cardiovascular risk assessments, expanding the utility of radiology studies.