Evaluating Large Language Models for Turkish Emergency CT Impression Drafting: Quality, Critical Omissions, and Readability.
Authors
Affiliations (2)
- Department of Radiology, Ankara Etlik City Hospital, Ankara, Turkey. [email protected].
- Department of Radiology, Ankara Bilkent City Hospital, Ankara, Turkey.
Abstract
The purpose of this study was to compare large language models (LLMs) for drafting Turkish emergency CT impression text and to quantify quality, critical-omission risk, and readability across anatomical regions. In this retrospective observational study, 1374 emergency CT reports were screened; 802 met the inclusion criteria (abdomen, 204; chest, 200; cranial, 198; head and neck, 200). Report sections were provided to four LLMs (Grok-2, ChatGPT-4o-Latest, Gemini-2.0-Flash, DeepSeek-V3-FW) to generate impression drafts. Two radiologists rated the impressions on a 4-point Likert scale. Omissions of predefined critical findings were recorded by region, and the Ateşman readability index was calculated. Likert scores varied by model and region, with higher mean scores for head and neck and cranial examinations. Critical omissions were uncommon overall but showed model- and region-specific patterns; the highest omission rate occurred in abdominal CT for one model. Readability differed by text type, with radiologist impressions and the higher-performing models showing similar and relatively high readability. LLM-generated Turkish CT impressions can reach acceptable quality in selected settings, but occasional critical omissions persist. These tools should be used as decision support under clinician oversight rather than deployed standalone.
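For context, the Ateşman readability index mentioned above is a Turkish adaptation of the Flesch formula: score = 198.825 − 40.175 × (syllables/words) − 2.610 × (words/sentences), where higher scores indicate easier text. A minimal sketch of its computation is below; the tokenization rules (sentence splitting on terminal punctuation, syllable counting via vowel counting, which holds for Turkish) are simplifying assumptions, not the study's actual implementation.

```python
import re

# In Turkish, the number of syllables in a word equals its number of vowels.
TURKISH_VOWELS = set("aeıioöuüAEIİOÖUÜ")

def count_syllables(word: str) -> int:
    """Count syllables of a Turkish word by counting its vowels."""
    return sum(1 for ch in word if ch in TURKISH_VOWELS)

def atesman_score(text: str) -> float:
    """Ateşman readability index (Ateşman, 1997):
    198.825 - 40.175 * (syllables/words) - 2.610 * (words/sentences).
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text, flags=re.UNICODE)
    syllables = sum(count_syllables(w) for w in words)
    return (198.825
            - 40.175 * (syllables / len(words))
            - 2.610 * (len(words) / len(sentences)))
```

For example, `atesman_score("Bu bir test.")` evaluates the formula with 3 words, 3 syllables, and 1 sentence.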