Leveraging Large Language Models for Accurate AO Fracture Classification from CT Text Reports.
Affiliations
- Department of Diagnostic and Interventional Radiology, School of Medicine, TUM University Hospital, Technical University of Munich, 81675, Munich, Germany.
- Department of Diagnostic and Interventional Radiology, School of Medicine, TUM University Hospital, Technical University of Munich, 81675, Munich, Germany. [email protected].
- Department of Trauma Surgery, School of Medicine, TUM University Hospital, Technical University of Munich, 81675, Munich, Germany.
- Department of Cardiovascular Radiology and Nuclear Medicine, School of Medicine and Health, German Heart Center Munich, Technical University of Munich, 80636, Munich, Germany.
Abstract
Large language models (LLMs) have shown promising potential in analyzing complex textual data, including radiological reports. These models can assist clinicians, particularly those with limited experience, by integrating and presenting diagnostic criteria within radiological classifications. However, before clinical adoption, LLMs must be rigorously validated by medical professionals to ensure accuracy, especially in the context of advanced radiological classification systems. This study evaluates the performance of four LLMs (ChatGPT-4o, AmbossGPT, Claude 3.5 Sonnet, and Gemini 2.0 Flash) in classifying fractures based on the AO classification system using CT reports. A dataset of 292 fictitious physician-generated CT reports, representing 310 fractures, was used to retrospectively assess each LLM's accuracy in AO fracture classification. Performance was evaluated by comparing the models' classifications to ground-truth labels, with accuracy rates analyzed across fracture types and subtypes. ChatGPT-4o and AmbossGPT achieved the highest overall accuracy (74.6% and 74.3%, respectively), outperforming Claude 3.5 Sonnet (69.5%) and Gemini 2.0 Flash (62.7%). Statistically significant differences were observed in fracture type classification, particularly between ChatGPT-4o and Gemini 2.0 Flash (Δ12%, p < 0.001). While all models demonstrated strong bone recognition rates (90-99%), their accuracy in fracture subtype classification remained lower (71-77%), indicating limitations in nuanced diagnostic categorization. LLMs show potential in assisting radiologists with initial fracture classification, particularly in high-volume or resource-limited settings. However, their performance remains inconsistent for detailed subtype classification, highlighting the need for further refinement and validation before clinical integration into advanced diagnostic workflows.
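To make the evaluation protocol concrete, the sketch below shows one plausible way to reproduce it in Python: prompt an LLM to assign an AO/OTA code to a CT report, score predictions against ground-truth labels, and compare two models with a two-proportion chi-square test. The prompt wording, the `classify_report` and `llm_call` names, and the choice of a chi-square test are illustrative assumptions, not the authors' published protocol; the correct-answer counts are rounded back out of the reported percentages for demonstration only.

```python
"""Minimal sketch of the evaluation described in the abstract.
Assumptions: `llm_call` is a hypothetical callable wrapping any
chat-completion API (prompt string in, response text out); the prompt
template and statistical test are illustrative, not the study's own."""

from scipy.stats import chi2_contingency

PROMPT_TEMPLATE = (
    "You are a musculoskeletal radiologist. Classify the fracture described "
    "in the following CT report according to the AO/OTA classification. "
    "Answer with the AO code only (e.g. '23-A2').\n\nReport:\n{report}"
)

def classify_report(llm_call, report: str) -> str:
    # Send one CT report to the model and return its predicted AO code.
    return llm_call(PROMPT_TEMPLATE.format(report=report)).strip()

def accuracy(predictions, ground_truth) -> float:
    # Fraction of fractures whose predicted AO code matches the label.
    hits = sum(p == t for p, t in zip(predictions, ground_truth))
    return hits / len(ground_truth)

def compare_models(correct_a: int, correct_b: int, n: int):
    # Two-proportion chi-square test on a 2x2 contingency table
    # (correct vs. incorrect per model), as a stand-in for the paper's
    # unspecified significance analysis.
    table = [[correct_a, n - correct_a],
             [correct_b, n - correct_b]]
    chi2, p, _, _ = chi2_contingency(table)
    return chi2, p

if __name__ == "__main__":
    # Counts reconstructed from the reported overall accuracies
    # (310 fractures); rounded, for illustration only.
    n = 310
    chi2, p = compare_models(round(0.746 * n), round(0.627 * n), n)
    print(f"ChatGPT-4o vs Gemini 2.0 Flash: chi2={chi2:.2f}, p={p:.4f}")
```

In this framing, per-level accuracies (bone recognition, fracture type, subtype) would be computed by truncating the predicted and ground-truth AO codes to the corresponding level before comparison.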