Large Language Models for Diagnosing Focal Liver Lesions From CT/MRI Reports: A Comparative Study With Radiologists.

Authors

Sheng L, Chen Y, Wei H, Che F, Wu Y, Qin Q, Yang C, Wang Y, Peng J, Bashir MR, Ronot M, Song B, Jiang H

Affiliations (6)

  • Department of Radiology and Functional and Molecular Imaging Key Laboratory of Sichuan Province, West China Hospital, Sichuan University, Chengdu, Sichuan, China.
  • Department of Radiology, The First Affiliated Hospital of Guangxi Medical University, Nanning, Guangxi, China.
  • College of Nursing, North Sichuan Medical College, Nanchong, Sichuan, China.
  • Division of Gastroenterology, Department of Radiology, Center for Advanced Magnetic Resonance in Medicine, Department of Medicine, Duke University Medical Center, Durham, North Carolina, USA.
  • Université Paris Cité, UMR 1149, CRI, Paris & Service de Radiologie, Hôpital Beaujon, AP-HP.Nord, Clichy, France.
  • Department of Radiology, Sanya People's Hospital, Sanya, Hainan, China.

Abstract

Whether large language models (LLMs) can be integrated into the diagnostic workflow for focal liver lesions (FLLs) remains unclear. We aimed to investigate the diagnostic accuracy of two generic LLMs (ChatGPT-4o and Gemini) based on CT/MRI reports, compared with and combined with radiologists of different experience levels. This single-center retrospective study included consecutive adult patients who, between April 2022 and April 2024, underwent contrast-enhanced CT/MRI for a single FLL followed by histopathologic examination. The LLMs were prompted three times with the clinical information and the "findings" section of the radiology reports to provide differential diagnoses in descending order of likelihood, with the first considered the final diagnosis. In the research setting, six radiologists (three junior and three middle-level) independently reviewed the CT/MRI images and clinical information in two rounds (first alone, then with LLM assistance). In the clinical setting, diagnoses were retrieved from the "impressions" section of the radiology reports. Diagnostic accuracy was assessed against histopathology. In total, 228 patients (median age, 59 years; 155 males) with 228 FLLs (median size, 3.6 cm) were included. For the final diagnosis, the accuracy of two-step ChatGPT-4o (78.9%) was higher than that of single-step ChatGPT-4o (68.0%, p < 0.001) and single-step Gemini (73.2%, p = 0.004), similar to that of real-world radiology reports (80.0%, p = 0.34) and junior radiologists (78.9%-82.0%; p-values, 0.21 to > 0.99), but lower than that of middle-level radiologists (84.6%-85.5%; p-values, 0.001 to 0.02). No incremental diagnostic value of ChatGPT-4o was observed for any radiologist (p-values, 0.63 to > 0.99). Two-step ChatGPT-4o matched the accuracy of real-world radiology reports and junior radiologists for diagnosing FLLs but was less accurate than middle-level radiologists and provided little incremental diagnostic value.
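The abstract describes the prompting and evaluation protocol only at a high level. As a rough illustration, the sketch below shows one way such a workflow could be scripted: a report-based differential-diagnosis prompt, followed by a paired accuracy comparison via McNemar's test (the kind of test typically used for the paired p-values reported above). The prompt wording, the model name "gpt-4o", the helper names, and the use of the OpenAI and statsmodels Python libraries are assumptions for illustration, not the study's actual implementation; in particular, the "two-step" prompting detail is not specified in the abstract and is not reproduced here.

```python
# Illustrative sketch only; the authors' actual prompts and pipeline are not
# given in the abstract, and everything below is an assumption.
from openai import OpenAI
from statsmodels.stats.contingency_tables import mcnemar

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def llm_differential(clinical_info: str, findings: str, model: str = "gpt-4o") -> list[str]:
    """Ask the model for differential diagnoses in descending likelihood.

    Per the abstract, each case was prompted three times; a single call is
    shown here for brevity. The first listed diagnosis is taken as final.
    """
    prompt = (
        "Clinical information:\n" + clinical_info + "\n\n"
        "CT/MRI report findings:\n" + findings + "\n\n"
        "List the most likely diagnoses for this focal liver lesion "
        "in descending order of likelihood, one per line."
    )
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    lines = resp.choices[0].message.content.splitlines()
    return [line.strip() for line in lines if line.strip()]


def compare_paired_accuracy(llm_correct: list[bool], rad_correct: list[bool]):
    """McNemar test on paired per-case correctness (LLM vs. radiologist),
    with correctness judged against the histopathologic reference standard."""
    both = sum(a and b for a, b in zip(llm_correct, rad_correct))
    llm_only = sum(a and not b for a, b in zip(llm_correct, rad_correct))
    rad_only = sum(b and not a for a, b in zip(llm_correct, rad_correct))
    neither = sum(not a and not b for a, b in zip(llm_correct, rad_correct))
    table = [[both, llm_only], [rad_only, neither]]
    return mcnemar(table, exact=True)  # .pvalue holds the paired p-value
```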

Topics

  • Magnetic Resonance Imaging
  • Tomography, X-Ray Computed
  • Liver Neoplasms
  • Journal Article
  • Comparative Study
