Dual-center study on AI-driven multi-label deep learning for X-ray screening of knee abnormalities.
Authors
Affiliations (7)
- Department of Spine Surgery, The Sixth Affiliated Hospital of Xinjiang Medical University, Urumqi, 830000, Xinjiang, People's Republic of China.
- Xinjiang Key Laboratory of Artificial Intelligence Assisted Imaging Diagnosis, Department of Radiology, The First People's Hospital of Kashi Prefecture, Kashi, 844000, Xinjiang, People's Republic of China.
- Department of Sports Medicine, The First People's Hospital of Kashi Prefecture, Kashi, 844000, Xinjiang, People's Republic of China.
- Department of Orthopedics, Shandong Provincial Hospital, Shandong First Medical University, Jinan, 250014, Shandong, China.
- Department of Spine Surgery, Hanzhong Central Hospital, Hanzhong, 723011, Shaanxi, China.
- Department of Spine Surgery, The Sixth Affiliated Hospital of Xinjiang Medical University, Urumqi, 830000, Xinjiang, People's Republic of China. [email protected].
- Department of Spine Surgery, The Sixth Affiliated Hospital of Xinjiang Medical University, Urumqi, 830000, Xinjiang, People's Republic of China. [email protected].
Abstract
Knee abnormalities, such as meniscus tears and ligament injuries, are common in clinical practice and pose significant diagnostic challenges. Traditional imaging techniques, including X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), are vital for assessment; however, X-rays and CT scans often fail to adequately visualize soft tissue injuries, and MRI can be costly and time-consuming. To overcome these limitations, we developed an AI-driven approach that detects soft tissue abnormalities directly from X-ray images, a capability traditionally reserved for MRI or arthroscopy. We conducted a retrospective study of 4,215 patients from two medical centers, using knee X-ray images annotated by orthopedic surgeons. The YOLOv11 model automated knee localization, while five convolutional neural networks (ResNet152, DenseNet121, MobileNetV3, ShuffleNetV2, and VGG19) were adapted for multi-label classification of eight conditions: meniscus tears (MENI), anterior cruciate ligament tears (ACL), posterior cruciate ligament injuries (PCL), medial collateral ligament injuries (MCL), lateral collateral ligament injuries (LCL), joint effusion (EFFU), bone marrow edema or contusion (CONT), and soft tissue injuries (STI). Data preprocessing involved normalization and region of interest (ROI) extraction, and training was enhanced with spatial augmentations. Performance was assessed using mean average precision (mAP), F1-scores, and area under the curve (AUC). We also developed a Windows-based PyQt application and a Flask web application for clinical integration, incorporating explainable AI techniques (Grad-CAM, Score-CAM) for interpretability. The YOLOv11 model achieved precise knee localization with a mAP@0.5 of 0.995. In classification, ResNet152 outperformed the other networks, recording a mAP of 90.1% in internal testing and AUCs up to 0.863 (EFFU) in external testing.
End-to-end performance on the external set yielded a mAP of 86.1% and an F1-score of 84.0% with ResNet152. The Windows and web applications processed imaging data successfully, with predictions aligning with MRI and arthroscopic findings in cases such as ACL and meniscus tears. Explainable AI visualizations clarified model decisions by highlighting key regions for complex injuries, such as concurrent ligament and soft tissue damage, enhancing clinical trust. This AI-driven model markedly improved the precision and efficiency of knee abnormality detection through X-ray analysis. By accurately identifying multiple coexisting conditions in a single pass, it offers a scalable tool to enhance diagnostic workflows and patient outcomes, especially in resource-constrained settings.
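The multi-label setup described in the abstract differs from ordinary single-label classification: each of the eight conditions is decided independently, so one X-ray can be flagged for several coexisting abnormalities at once. A minimal sketch of that decision logic, assuming sigmoid outputs and a uniform 0.5 decision threshold (the paper's actual network heads and thresholds are not specified here):

```python
import math

# The eight target conditions from the study, in a hypothetical output order.
LABELS = ["MENI", "ACL", "PCL", "MCL", "LCL", "EFFU", "CONT", "STI"]

def sigmoid(x: float) -> float:
    """Map a raw logit to an independent per-label probability."""
    return 1.0 / (1.0 + math.exp(-x))

def predict_labels(logits: list[float], threshold: float = 0.5) -> list[str]:
    """Return the conditions whose sigmoid probability meets the threshold.

    Unlike a softmax classifier, where probabilities compete and sum to 1,
    each label here is an independent binary decision, which is what lets
    the model report multiple coexisting abnormalities in a single pass.
    """
    probs = [sigmoid(z) for z in logits]
    return [name for name, p in zip(LABELS, probs) if p >= threshold]

# Illustrative logits suggesting a concurrent ACL tear and joint effusion.
print(predict_labels([-2.0, 1.5, -3.0, -1.0, -2.5, 2.2, -0.5, -1.8]))
# → ['ACL', 'EFFU']
```

In practice the per-label thresholds would be tuned on a validation set (for example, to maximize each label's F1-score) rather than fixed at 0.5.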