
Comparing The Efficacy Between ChatGPT 5, Grok 3, and Claude 4.5 Sonnet in Analyzing Orthopedic Trauma-Related Imaging.

March 4, 2026

Authors

Holmstrom JA, Braithwaite CL, Alhankawi AR, Moore ML, Patel KA, Miller BH

Affiliations (3)

  • Mayo Clinic Alix School of Medicine, Scottsdale, AZ.
  • Mayo Clinic Department of Orthopedic Surgery, Phoenix, AZ.
  • Sonoran Orthopaedic Trauma Surgeons, Scottsdale, AZ.

Abstract

Objective: To evaluate and compare the ability of three popular publicly available artificial intelligence (AI) platforms to diagnose common trauma-related fractures from radiologic imaging.

Design: Retrospective diagnostic performance comparison study.

Setting: Publicly accessible online radiologic imaging databases.

Methods: Five common orthopedic trauma fractures were assessed: ankle, tibial plateau, intertrochanteric, femoral neck, and humerus. Radiographs and computed tomography (CT) images with confirmed diagnoses were randomly selected from Radiopaedia.org; the reference standard was the expert-verified diagnosis provided by Radiopaedia.org, limited to cases labeled with a "diagnosis certain" tag. ChatGPT 5, Grok 3, and Claude 4.5 Sonnet were queried with each image, and each model was provided with 30 radiographs and 20 CT images whenever possible. Diagnostic accuracy, sensitivity, specificity, positive and negative predictive values, and performance by modality (X-ray vs. CT) were assessed.

Results: ChatGPT 5, Grok 3, and Claude 4.5 Sonnet correctly diagnosed fractures in 26.8%, 18.8%, and 22.4% of cases, respectively. By fracture type, ChatGPT 5 demonstrated the highest correct classification rates for ankle (10%), femoral neck (38%), humerus (40%), and tibial plateau (44%) fractures, while Grok 3 demonstrated the highest rate for intertrochanteric fractures (6%). Overall sensitivities were 0.267, 0.187, and 0.223 for ChatGPT 5, Grok 3, and Claude 4.5 Sonnet, respectively; ChatGPT 5 outperformed both Grok 3 and Claude 4.5 Sonnet (both p<0.001). No modality-based performance differences were observed for any model.

Conclusions: Among the publicly available large language models (LLMs) evaluated for radiologic interpretation of orthopedic trauma imaging, ChatGPT 5 demonstrated the highest overall diagnostic accuracy, followed by Claude 4.5 Sonnet and Grok 3. Despite this relative variation between models, overall diagnostic accuracy for fracture detection was low across all platforms (<27%). In their baseline forms, these publicly accessible LLMs are not recommended for radiologic imaging interpretation.

Level of Evidence: III.
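The metrics reported in the abstract (accuracy, sensitivity, specificity, PPV, NPV) all derive from a binary confusion matrix. As a minimal illustrative sketch, not the authors' actual analysis code, the computation looks like this; the counts in the usage example are hypothetical, chosen only so the sensitivity roughly matches ChatGPT 5's reported 0.267:

```python
def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute standard diagnostic performance metrics from a
    binary confusion matrix (true/false positives and negatives)."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Hypothetical counts for illustration only (not from the study):
m = diagnostic_metrics(tp=27, fp=10, tn=40, fn=74)
print(round(m["sensitivity"], 3))  # 0.267
```

With 27 true positives out of 101 fracture images, sensitivity is 27/101 ≈ 0.267; the other metrics follow the same pattern from their respective row or column of the confusion matrix.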

Topics

Journal Article
