Evaluating the Diagnostic Accuracy of ChatGPT-4.0 for Classifying Multimodal Musculoskeletal Masses: A Comparative Study with Human Raters.

Authors

Bosbach WA, Schoeni L, Beisbart C, Senge JF, Mitrakovic M, Anderson SE, Achangwa NR, Divjak E, Ivanac G, Grieser T, Weber MA, Maurer MH, Sanal HT, Daneshvar K

Affiliations (13)

  • Department of Nuclear Medicine, Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
  • Department of Diagnostic, Interventional and Paediatric Radiology (DIPR), Inselspital, Bern University Hospital, University of Bern, Bern, Switzerland.
  • Institute of Philosophy, University of Bern, Bern, Switzerland.
  • Center for Artificial Intelligence in Medicine, University of Bern, Bern, Switzerland.
  • Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany.
  • Dioscuri Centre in Topological Data Analysis, Mathematical Institute PAN, Warsaw, Poland.
  • Sydney School of Medicine, University of Notre Dame Australia, Darlinghurst Sydney, Australia.
  • University of Zagreb School of Medicine, Department of Diagnostic and Interventional Radiology, University Hospital "Dubrava", Zagreb, Croatia.
  • Department of Diagnostic and Interventional Radiology, University Hospital Augsburg, Augsburg, Germany.
  • Institute of Diagnostic and Interventional Radiology, Pediatric Radiology and Neuroradiology, University Medical Center Rostock, Rostock, Germany.
  • Department of Diagnostic and Interventional Radiology, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany.
  • Radiology Department, University of Health Sciences, Gülhane Training and Research Hospital, Ankara, Turkey.
  • Department of Anatomy, Ankara University Institute of Health Sciences, Ankara, Türkiye.

Abstract

Novel artificial intelligence tools have the potential to significantly enhance productivity in medicine while maintaining or even improving treatment quality. In this study, we aimed to evaluate the current capability of ChatGPT-4.0 to accurately interpret multimodal musculoskeletal tumor cases.

We created 25 cases, each containing images from X-ray, computed tomography, magnetic resonance imaging, or scintigraphy. ChatGPT-4.0 was tasked with classifying each case using a six-option, two-choice question, in which both a primary and a secondary diagnosis were allowed. For performance evaluation, human raters also assessed the same cases.

When only the primary diagnosis was taken into account, the accuracy of human raters was nearly twice that of ChatGPT-4.0 (87% vs. 44%). However, when secondary diagnoses were also considered, the performance gap shrank substantially (accuracy: 94% vs. 71%). A power analysis based on Cohen's w confirmed the adequacy of the sample size (n = 25).

The tested artificial intelligence tool demonstrated lower performance than human raters. Considering factors such as speed, constant availability, and potential future improvements, it appears plausible that artificial intelligence tools could serve as valuable assistance systems for doctors in future clinical settings.

Key points:

  • ChatGPT-4.0 classifies musculoskeletal cases using multimodal imaging inputs.
  • Human raters outperform AI in primary diagnosis accuracy by a factor of nearly two.
  • Including secondary diagnoses improves AI performance and narrows the gap.
  • AI demonstrates potential as an assistive tool in future radiological workflows.
  • Power analysis confirms the robustness of the study findings with the current sample size.

Citation: Bosbach WA, Schoeni L, Beisbart C et al. Evaluating the Diagnostic Accuracy of ChatGPT-4.0 for Classifying Multimodal Musculoskeletal Masses: A Comparative Study with Human Raters. Rofo 2025; DOI 10.1055/a-2594-7085.
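The abstract's power analysis based on Cohen's w can be sketched as follows. This is a minimal illustration only: the paper's actual effect size, significance level, and degrees of freedom are not reported in the abstract, so Cohen's w = 0.8 (a large effect), alpha = 0.05, and a one-degree-of-freedom chi-square test are assumptions made here for demonstration.

```python
from scipy.stats import chi2, ncx2

def chisq_power(w: float, n: int, df: int, alpha: float = 0.05) -> float:
    """Power of a chi-square test given Cohen's effect size w and n observations.

    Under the alternative hypothesis, the test statistic follows a noncentral
    chi-square distribution with noncentrality parameter n * w**2.
    """
    crit = chi2.ppf(1 - alpha, df)   # rejection threshold under H0
    nc = n * w ** 2                  # noncentrality parameter under H1
    return ncx2.sf(crit, df, nc)     # probability of (correctly) rejecting H0

# Assumed parameters (not taken from the paper): w = 0.8, n = 25, df = 1
print(f"Power at n = 25: {chisq_power(0.8, 25, 1):.3f}")
```

With these assumed inputs, a sample of 25 cases yields power well above the conventional 0.80 threshold, which is consistent with the abstract's statement that n = 25 was adequate for a large observed effect.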

Topics

Journal Article
