Retrieval-Augmented Generation with Large Language Models in Radiology: From Theory to Practice.
Authors
Affiliations (3)
- Department of Diagnostic and Interventional Radiology, Faculty of Medicine, University of Freiburg Medical Center, University of Freiburg, Breisacher Str 64, 79106 Freiburg, Germany.
- Department of Neuroradiology, Faculty of Medicine, University of Freiburg Medical Center, University of Freiburg, Freiburg, Germany.
- Department of Stereotactic and Functional Neurosurgery, Faculty of Medicine, University of Freiburg Medical Center, University of Freiburg, Freiburg, Germany.
Abstract
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i>

Large language models (LLMs) hold substantial promise in addressing the growing workload in radiology, but recent studies also reveal limitations, such as hallucinations and opacity in the sources of LLM responses. Retrieval-Augmented Generation (RAG)-based LLMs offer a promising approach to streamlining radiology workflows by integrating reliable, verifiable, and customizable information. Ongoing refinement is critical to enable RAG models to manage large amounts of input data and to engage in complex multiagent dialogues. This report provides an overview of recent advances in LLM architecture, including few-shot and zero-shot learning, RAG integration, multistep reasoning, and agentic RAG, and identifies future research directions. Exemplary cases demonstrate the practical application of these techniques in radiology practice. ©RSNA, 2025.
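The RAG pattern the abstract describes, retrieving verifiable reference passages and grounding the LLM prompt in them, can be illustrated with a minimal sketch. This is not code from the article: the toy bag-of-words retriever, the sample radiology corpus, and the `build_prompt` step are all illustrative assumptions standing in for a production embedding index and generation call.

```python
# Minimal RAG sketch (illustrative only, not the article's implementation).
# A toy bag-of-words retriever ranks reference passages against the query,
# and the top-k passages are prepended to the prompt so the LLM's answer
# can be grounded in citable sources.
from collections import Counter
import math

def bow(text):
    """Bag-of-words vector for a lowercased, whitespace-tokenized text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query, corpus, k=2):
    """Return the k corpus passages most similar to the query."""
    q = bow(query)
    return sorted(corpus, key=lambda doc: cosine(q, bow(doc)), reverse=True)[:k]

def build_prompt(query, corpus, k=2):
    """Assemble a grounded prompt: numbered sources, then the question."""
    sources = "\n".join(f"[{i + 1}] {doc}"
                        for i, doc in enumerate(retrieve(query, corpus, k)))
    return f"Answer using only the sources below.\n{sources}\nQuestion: {query}"

# Hypothetical reference corpus; in practice this would be an indexed
# knowledge base (guidelines, local protocols, report templates).
corpus = [
    "BI-RADS 4 indicates a suspicious abnormality; biopsy should be considered.",
    "Lung-RADS categorizes findings on low-dose CT lung cancer screening.",
    "Fleischner guidelines address incidental pulmonary nodule follow-up.",
]
prompt = build_prompt("What does BI-RADS 4 mean?", corpus, k=1)
print(prompt)
```

In a full pipeline, the returned prompt would be passed to the LLM, and the numbered sources would let the reader verify each claim against the retrieved passage, addressing the opacity problem the abstract highlights.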