Performance analysis of large language models in multi-disease detection from chest computed tomography reports: a comparative study.
Luo P, Fan C, Li A, Jiang T, Jiang A, Qi C, Gan W, Zhu L, Mou W, Zeng D, Tang B, Xiao M, Chu G, Liang Z, Shen J, Liu Z, Wei T, Cheng Q, Lin A, Chen X
Computed tomography (CT) is widely acknowledged as the gold standard for diagnosing thoracic diseases, but the accuracy of interpretation depends heavily on radiologists' expertise. Large language models (LLMs) have shown considerable promise in medical applications, particularly in radiology. This study assesses the performance of leading LLMs in analyzing unstructured chest CT reports and examines how questioning methodology and fine-tuning strategy influence their effectiveness in chest CT diagnosis.

This retrospective analysis evaluated 13,489 chest CT reports covering 13 common thoracic conditions across the pulmonary, cardiovascular, pleural, and upper abdominal systems. Five LLMs (Claude-3.5-Sonnet, GPT-4, GPT-3.5-Turbo, Gemini-Pro, Qwen-Max) were assessed under two questioning methodologies: multiple-choice and open-ended. Radiologist-curated datasets underwent rigorous preprocessing, including RadLex terminology standardization, multi-step diagnostic validation, and exclusion of ambiguous cases. Model performance was quantified via the Subjective Answer Accuracy Rate (SAAR), the Reference Answer Accuracy Rate (RAAR), and area under the receiver operating characteristic (ROC) curve analysis. GPT-3.5-Turbo was additionally fine-tuned (100 iterations with one training epoch) on 200 high-performing cases to improve diagnostic precision for initially misclassified conditions. (Illustrative sketches of the questioning styles, the evaluation metrics, and the fine-tuning setup follow the abstract.)

GPT-4 achieved the highest RAAR, 75.1% under multiple-choice questioning, followed by Qwen-Max (66.0%) and Claude-3.5 (63.5%), significantly outperforming GPT-3.5-Turbo (41.8%) and Gemini-Pro (40.8%) across the entire patient cohort. Multiple-choice questioning improved both RAAR and SAAR for all models relative to open-ended questioning, with RAAR consistently exceeding SAAR. Performance varied notably across diseases and organ systems. Notably, fine-tuning substantially improved GPT-3.5-Turbo, which initially performed poorly in most scenarios.

This study demonstrates that general-purpose LLMs can effectively interpret chest CT reports, with performance varying significantly across models depending on the questioning methodology and fine-tuning approach employed. For surgical practice, these findings provide evidence-based guidance for selecting AI tools to support preoperative planning, particularly for thoracic procedures. Integrating optimized LLMs into surgical workflows may improve decision-making efficiency, risk stratification, and diagnostic speed, potentially contributing to better surgical outcomes through more accurate preoperative assessment.
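The abstract contrasts multiple-choice and open-ended questioning. The sketch below illustrates what that contrast could look like in code, using the OpenAI chat completions API as a stand-in; the prompt wording, disease list, and model choice are assumptions for demonstration, not the study's actual prompts.

```python
# Illustrative sketch of the two questioning styles described in the abstract.
# Prompts, disease options, and model names here are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REPORT = "Unstructured chest CT report text goes here."
DISEASES = ["pulmonary nodule", "pleural effusion", "aortic aneurysm"]  # hypothetical subset

def ask_multiple_choice(report: str, diseases: list[str]) -> str:
    """Multiple-choice style: the model selects from a fixed disease list."""
    options = "\n".join(f"{i + 1}. {d}" for i, d in enumerate(diseases))
    prompt = (
        "Based on the following chest CT report, which of the listed "
        "conditions are present? Answer with the option numbers only.\n\n"
        f"Report:\n{report}\n\nOptions:\n{options}"
    )
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content

def ask_open_ended(report: str) -> str:
    """Open-ended style: the model names diagnoses without candidate options."""
    prompt = (
        "Based on the following chest CT report, list the thoracic "
        f"conditions that are present.\n\nReport:\n{report}"
    )
    resp = client.chat.completions.create(
        model="gpt-4", messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
```

Constraining the answer space, as in the multiple-choice variant, is one plausible reason the abstract reports higher accuracy rates under that questioning style.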
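The abstract names three evaluation measures but does not define SAAR or RAAR. The following minimal sketch assumes both are simple accuracy rates computed against two different answer keys (a subjectively adjudicated key and the radiologist-curated reference key); that interpretation, and all data values below, are assumptions, not the paper's stated formulas.

```python
# Hedged sketch of the evaluation metrics named in the abstract.
from sklearn.metrics import roc_auc_score

def accuracy_rate(predictions: list[str], answer_key: list[str]) -> float:
    """Fraction of model answers that match the given answer key."""
    assert len(predictions) == len(answer_key)
    return sum(p == a for p, a in zip(predictions, answer_key)) / len(answer_key)

# Hypothetical per-case outputs for one disease label.
preds      = ["yes", "no", "yes", "yes"]
subjective = ["yes", "no", "no", "yes"]   # subjectively adjudicated answers (assumed)
reference  = ["yes", "no", "yes", "no"]   # radiologist-curated reference answers (assumed)

saar = accuracy_rate(preds, subjective)   # Subjective Answer Accuracy Rate
raar = accuracy_rate(preds, reference)    # Reference Answer Accuracy Rate

# Per-disease ROC AUC from binary labels and model confidence scores
# (scores invented purely for illustration).
y_true  = [1, 0, 1, 0]
y_score = [0.9, 0.2, 0.7, 0.4]
auc = roc_auc_score(y_true, y_score)

print(f"SAAR={saar:.2f} RAAR={raar:.2f} AUC={auc:.2f}")
```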
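Finally, the abstract reports fine-tuning GPT-3.5-Turbo for one training epoch on 200 curated cases. A minimal sketch of that setup via the OpenAI fine-tuning API follows; the file name, JSONL contents, and prompt format are assumptions, and the study's 100-iteration scheme is not reproduced here.

```python
# Hedged sketch of one-epoch fine-tuning of GPT-3.5-Turbo, per the abstract.
import json
from openai import OpenAI

client = OpenAI()

# Each training case pairs a report-based question with the validated
# diagnosis, in the chat-format JSONL the fine-tuning endpoint expects.
cases = [
    {"report": "Example chest CT report...", "diagnosis": "pleural effusion"},
    # ... the study used 200 high-performing cases
]
with open("train.jsonl", "w") as f:
    for case in cases:
        f.write(json.dumps({
            "messages": [
                {"role": "user",
                 "content": f"Diagnose from this chest CT report:\n{case['report']}"},
                {"role": "assistant", "content": case["diagnosis"]},
            ]
        }) + "\n")

training_file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
    hyperparameters={"n_epochs": 1},  # one training epoch, as stated in the abstract
)
print(job.id)
```

A single epoch over a small, targeted set of initially misclassified conditions is consistent with the abstract's goal of correcting specific weaknesses without large-scale retraining.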