A fine-tuned, domain-specific LLM (LLM-RadSum) outperforms GPT-4o in accurately summarizing radiology reports across multiple patient demographics and modalities.
Key Details
- LLM-RadSum, based on Llama2, was trained and evaluated on over 1 million CT and MRI radiology reports from five hospitals.
- The model achieved higher F1 scores in summarization than GPT-4o (0.58 vs. 0.3, p < 0.001), a gap that held across anatomic regions, modalities, sexes, and age groups (see the illustrative F1 sketch after this list).
- 88.9% of LLM-RadSum's outputs were rated 'completely consistent' with the original reports, versus 43.1% for GPT-4o.
- 81.5% of LLM-RadSum outputs met senior radiologists’ standards for safety and clinical use; most GPT-4o outputs required minor edits.
- Human evaluation covered 1,800 randomly selected reports, underscoring generalizability across diverse hospital settings.
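The reported F1 gap (0.58 vs. 0.3) is easier to interpret with a concrete metric in mind. The study's exact scoring method is not described in this summary, so the snippet below is only a minimal sketch of a simple token-overlap F1 between a generated impression and the radiologist-written reference; the function name and example strings are illustrative, not taken from the paper.

```python
# Minimal sketch: token-overlap F1 between a model-generated impression and a
# reference impression. The study's actual F1 definition (e.g., finding-level
# or entity-based matching) may differ; this is purely illustrative.
from collections import Counter


def token_f1(generated: str, reference: str) -> float:
    """Compute F1 over shared tokens between two strings."""
    gen_tokens = generated.lower().split()
    ref_tokens = reference.lower().split()
    if not gen_tokens or not ref_tokens:
        return 0.0

    # Multiset intersection counts each shared token up to its minimum count.
    overlap = sum((Counter(gen_tokens) & Counter(ref_tokens)).values())
    if overlap == 0:
        return 0.0

    precision = overlap / len(gen_tokens)
    recall = overlap / len(ref_tokens)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    reference = "no acute intracranial hemorrhage or mass effect"
    candidate = "no evidence of acute hemorrhage or mass effect"
    print(f"token-level F1: {token_f1(candidate, reference):.2f}")
```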
Why It Matters

Source
AuntMinnie
Related News

AI's Expanding Role in Healthcare and Implications for Radiology
A series of thought leaders and institutions weigh in on AI's transformative potential in healthcare, with emphasis on radiology adoption and responsible use.

Most Radiology AI Users Lack Clear Evidence of Financial ROI
Survey finds over 75% of radiology organizations using AI lack clear, quantified ROI data.

Toronto Study: LLMs Must Cite Sources for Radiology Decision Support
University of Toronto researchers found that large language models (LLMs) such as DeepSeek V3 and GPT-4o offer promising support for radiology decision-making in pancreatic cancer when their recommendations cite guideline sources.