Page 8 of 19190 results

Automated Classification of Cervical Spinal Stenosis using Deep Learning on CT Scans.

Zhang YL, Huang JW, Li KY, Li HL, Lin XX, Ye HB, Chen YH, Tian NF

pubmed · Jun 3 2025
Retrospective study. To develop and validate a computed tomography-based deep learning (DL) model for diagnosing cervical spinal stenosis (CSS). Although magnetic resonance imaging (MRI) is widely used for diagnosing CSS, its inherent limitations, including prolonged scanning time, limited availability in resource-constrained settings, and contraindications for patients with metallic implants, make computed tomography (CT) a critical alternative in specific clinical scenarios. The development of CT-based DL models for CSS detection holds promise for transcending the diagnostic limitations of conventional CT imaging, thereby serving as an intelligent auxiliary tool to optimize healthcare resource allocation. Paired CT/MRI images were collected, and the CT images were divided into training, validation, and test sets in an 8:1:1 ratio. The model employed a two-stage architecture: (1) a Faster R-CNN-based detection model for localization, annotation, and extraction of regions of interest (ROIs); (2) a comparison of 16 convolutional neural network (CNN) models for stenosis classification, from which the best-performing model was selected. The evaluation metrics included accuracy, F1-score, and Cohen's κ coefficient, with comparisons made against diagnostic results from physicians with varying years of experience. In the multiclass classification task, four high-performing models (DL1-b0, DL2-121, DL3-101, and DL4-26d) achieved accuracies of 88.74%, 89.40%, 89.40%, and 88.08%, respectively. All models demonstrated >80% consistency with senior physicians and >70% consistency with junior physicians. In the binary classification task, the models achieved accuracies of 94.70%, 96.03%, 96.03%, and 94.70%, respectively. All four models demonstrated consistency rates slightly below 90% with junior physicians; when compared with senior physicians, however, three models (all except DL4-26d) exhibited consistency rates exceeding 90%.
The DL model developed in this study demonstrated high accuracy in CT image analysis of CSS, with a diagnostic performance comparable to that of senior physicians.
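The agreement figures above rest on Cohen's κ, which corrects observed agreement for agreement expected by chance. As an illustrative sketch (not the authors' code), κ can be computed from a confusion matrix of model-versus-physician labels:

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix
    (rows: rater A's labels, columns: rater B's labels)."""
    k = len(confusion)
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of cases on the diagonal.
    observed = sum(confusion[i][i] for i in range(k)) / n
    # Chance agreement: product of the two raters' marginal frequencies.
    row_totals = [sum(row) for row in confusion]
    col_totals = [sum(confusion[i][j] for i in range(k)) for j in range(k)]
    expected = sum(r * c for r, c in zip(row_totals, col_totals)) / (n * n)
    return (observed - expected) / (1 - expected)
```

For example, perfect agreement yields κ = 1, while agreement no better than chance yields κ = 0.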

MobileTurkerNeXt: investigating the detection of Bankart and SLAP lesions using magnetic resonance images.

Gurger M, Esmez O, Key S, Hafeez-Baig A, Dogan S, Tuncer T

pubmed · Jun 2 2025
The landscape of computer vision is predominantly shaped by two groundbreaking methodologies: transformers and convolutional neural networks (CNNs). In this study, we aim to introduce an innovative mobile CNN architecture designed for orthopedic imaging that efficiently identifies both Bankart and SLAP lesions. Our approach involved the collection of two distinct magnetic resonance (MR) image datasets, with the primary goal of automating the detection of Bankart and SLAP lesions. A novel mobile CNN, dubbed MobileTurkerNeXt, forms the cornerstone of this research. This newly developed model, comprising roughly 1 million trainable parameters, unfolds across four principal stages: the stem, main, downsampling, and output phases. The stem phase incorporates three convolutional layers to initiate feature extraction. In the main phase, we introduce an innovative block, drawing inspiration from the ConvNeXt, EfficientNet, and ResNet architectures. The downsampling phase utilizes patchify average pooling and pixel-wise convolution to effectively reduce spatial dimensions, while the output phase is engineered to yield the classification outcomes. Our experimentation with MobileTurkerNeXt spanned three comparative scenarios: Bankart versus normal, SLAP versus normal, and a tripartite comparison of Bankart, SLAP, and normal cases. The model demonstrated exemplary performance, achieving test classification accuracies exceeding 96% across these scenarios. The empirical results underscore MobileTurkerNeXt's superior classification performance in differentiating among Bankart, SLAP, and normal conditions in orthopedic imaging. This underscores the potential of our proposed mobile CNN in advancing diagnostic capabilities and contributing significantly to the field of medical image analysis.
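The downsampling phase pairs patchify average pooling with pixel-wise (1×1) convolution. A minimal, hypothetical sketch of the pooling half on a plain 2D feature map, illustrating the idea rather than the paper's implementation:

```python
def patchify_avg_pool(x, patch=2):
    """Non-overlapping patch average pooling on a 2D feature map
    (list of lists); each patch x patch block collapses to its mean,
    halving spatial resolution for patch=2."""
    h, w = len(x), len(x[0])
    return [
        [
            sum(x[i + di][j + dj] for di in range(patch) for dj in range(patch))
            / (patch * patch)
            for j in range(0, w, patch)
        ]
        for i in range(0, h, patch)
    ]
```

In the real model this would run per channel on tensors, followed by a 1×1 convolution to mix channels.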

MRI Radiomics based on paraspinal muscle for prediction postoperative outcomes in lumbar degenerative spondylolisthesis.

Yu Y, Xu W, Li X, Zeng X, Su Z, Wang Q, Li S, Liu C, Wang Z, Wang S, Liao L, Zhang J

pubmed · Jun 2 2025
This study aims to develop a paraspinal muscle-based radiomics model using a machine learning approach and to assess its utility in predicting postoperative outcomes among patients with lumbar degenerative spondylolisthesis (LDS). This retrospective study included 155 patients diagnosed with LDS who underwent single-level posterior lumbar interbody fusion (PLIF) surgery between January 2021 and October 2023. The patients were divided into training and test cohorts in an 8:2 ratio. Radiomics features were extracted from axial T2-weighted lumbar MRI, and seven machine learning models were developed after selecting the most relevant radiomic features using the t-test, Pearson correlation, and Lasso. A combined model was then created by integrating both clinical and radiomics features. Model performance was evaluated through ROC analysis, sensitivity, and specificity, while clinical utility was assessed using AUC and Decision Curve Analysis (DCA). The logistic regression (LR) model demonstrated robust predictive performance compared to the other machine learning models evaluated in the study. The combined model, integrating both clinical and radiomic features, exhibited an AUC of 0.822 (95% CI, 0.761-0.883) in the training cohort and 0.826 (95% CI, 0.766-0.886) in the test cohort, indicating substantial predictive capability. Moreover, the combined model showed superior clinical benefit and increased classification accuracy compared to the radiomics model alone. The findings suggest that the combined model holds promise for accurately predicting postoperative outcomes in patients with LDS and could be valuable in guiding treatment strategies and assisting clinicians in making informed clinical decisions.
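Pearson correlation is one of the three feature filters named above. A small illustrative sketch (not the study's pipeline, which also applied the t-test and Lasso) of greedily pruning redundant radiomic features by pairwise correlation:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def drop_correlated(features, threshold=0.9):
    """Greedily keep features whose |r| with every already-kept
    feature stays below the threshold; `features` maps name -> values."""
    kept = []
    for name, values in features.items():
        if all(abs(pearson_r(values, features[k])) < threshold for k in kept):
            kept.append(name)
    return kept
```

Here the threshold of 0.9 is an assumed illustrative value; the abstract does not report the cutoff used.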

Performance Comparison of Machine Learning Using Radiomic Features and CNN-Based Deep Learning in Benign and Malignant Classification of Vertebral Compression Fractures Using CT Scans.

Yeom JC, Park SH, Kim YJ, Ahn TR, Kim KG

pubmed · Jun 2 2025
Distinguishing benign from malignant vertebral compression fractures (VCFs) is critical for clinical management but remains challenging on contrast-enhanced abdominal CT, which lacks the soft-tissue contrast of MRI. This study evaluates and compares radiomic feature-based machine learning and convolutional neural network (CNN)-based deep learning models for classifying VCFs using abdominal CT. A retrospective cohort of 447 VCFs (196 benign, 251 malignant) from 286 patients was analyzed. Radiomic features were extracted using PyRadiomics, with Recursive Feature Elimination selecting six key texture-based features (e.g., Run Variance, Dependence Non-Uniformity Normalized), highlighting textural heterogeneity as a malignancy marker. Machine learning models (XGBoost, SVM, KNN, Random Forest) and a 3D CNN were trained on CT data, with performance assessed via precision, recall, F1 score, accuracy, and AUC. The deep learning model achieved marginally superior overall performance, with a statistically significantly higher AUC (77.66% vs. 75.91%, p < 0.05) and better precision, F1 score, and accuracy than the top-performing machine learning model (XGBoost). The deep learning model's attention maps localized diagnostically relevant regions, mimicking radiologists' focus, whereas radiomics lacked spatial interpretability despite offering quantifiable biomarkers. This study underscores the complementary strengths of the two approaches: radiomics provides interpretable features tied to tumor heterogeneity, while deep learning autonomously extracts high-dimensional patterns with spatial explainability. Integrating both approaches could enhance diagnostic accuracy and clinician trust in abdominal CT-based VCF assessment. Limitations include retrospective single-center data and potential selection bias. Future multi-center studies with diverse protocols and histopathological validation are warranted to generalize these findings.
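The headline metric here, AUC, is equivalent to the normalized Mann–Whitney U statistic: the probability that a randomly chosen malignant case is scored higher than a randomly chosen benign one. A minimal rank-based sketch (illustrative only, O(n²) for clarity):

```python
def auc(scores, labels):
    """AUC via the Mann–Whitney U statistic: the probability that a
    random positive outscores a random negative, ties counting half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A perfect separator gives 1.0; random scoring hovers around 0.5, which is why the 77.66% vs. 75.91% gap above, though statistically significant, is clinically modest.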

Deep learning-based acceleration of high-resolution compressed sense MR imaging of the hip.

Marka AW, Meurer F, Twardy V, Graf M, Ebrahimi Ardjomand S, Weiss K, Makowski MR, Gersing AS, Karampinos DC, Neumann J, Woertler K, Banke IJ, Foreman SC

pubmed · Jun 1 2025
To evaluate a Compressed Sense Artificial Intelligence framework (CSAI) incorporating parallel imaging, compressed sense (CS), and deep learning for high-resolution MRI of the hip, comparing it with standard-resolution CS imaging. Thirty-two patients with femoroacetabular impingement syndrome underwent 3 T MRI scans. Coronal and sagittal intermediate-weighted TSE sequences with fat saturation were acquired using CS (0.6 × 0.8 mm resolution) and CSAI (0.3 × 0.4 mm resolution) protocols in comparable acquisition times (7:49 vs. 8:07 minutes for both planes). Two readers systematically assessed the depiction of the acetabular and femoral cartilage (in five cartilage zones), labrum, ligamentum capitis femoris, and bone using a five-point Likert scale. Diagnostic confidence and abnormality detection were recorded and analyzed using the Wilcoxon signed-rank test. CSAI significantly improved cartilage depiction across most cartilage zones compared to CS. Overall Likert scores were 4.0 ± 0.2 (CS) vs. 4.2 ± 0.6 (CSAI) for reader 1 and 4.0 ± 0.2 (CS) vs. 4.3 ± 0.6 (CSAI) for reader 2 (p ≤ 0.001). Diagnostic confidence increased from 3.5 ± 0.7 and 3.9 ± 0.6 (CS) to 4.0 ± 0.6 and 4.1 ± 0.7 (CSAI) for readers 1 and 2, respectively (p ≤ 0.001). More cartilage lesions were detected with CSAI, with significant improvements in diagnostic confidence in certain cartilage zones, such as femoral zones C and D, for both readers. Labrum and ligamentum capitis femoris depiction remained similar, while bone depiction was rated lower. No abnormalities detected in CS were missed in CSAI. CSAI provides high-resolution hip MR images with enhanced cartilage depiction without extending acquisition times, potentially enabling more precise hip cartilage assessment.
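The paired reader scores were compared with the Wilcoxon signed-rank test. A simplified pure-Python sketch of the test statistic W (no p-value; zero differences dropped, tied magnitudes given average ranks, as is standard — in practice a statistics library would be used):

```python
def wilcoxon_w(a, b):
    """Wilcoxon signed-rank statistic W for paired samples: the
    smaller of the positive- and negative-difference rank sums."""
    diffs = [x - y for x, y in zip(a, b) if x != y]  # drop zero differences
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        # Find the run of tied |differences| and assign the average rank.
        j = i
        while j < len(order) and abs(diffs[order[j]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + 1 + j) / 2  # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            ranks[order[k]] = avg_rank
        i = j
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    w_minus = sum(r for r, d in zip(ranks, diffs) if d < 0)
    return min(w_plus, w_minus)
```

When nearly all paired differences favor one protocol, as with the Likert scores above, W approaches 0 and the test becomes highly significant.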

Semantic segmentation for individual thigh skeletal muscles of athletes on magnetic resonance images.

Kasahara J, Ozaki H, Matsubayashi T, Takahashi H, Nakayama R

pubmed · Jun 1 2025
The skeletal muscles that athletes should train vary depending on their discipline and position. Therefore, assessment of individual skeletal muscle cross-sectional area is important in the development of training strategies. To measure the cross-sectional area of skeletal muscle, manual segmentation of each muscle is performed on magnetic resonance (MR) images. This task is time-consuming and requires significant effort, and interobserver variability can sometimes be problematic. The purpose of this study was to develop an automated computerized method for semantic segmentation of individual thigh skeletal muscles from MR images of athletes. Our database consisted of 697 images from the thighs of 697 elite athletes. The images were randomly divided into a training dataset (70%), a validation dataset (10%), and a test dataset (20%). A label image was generated for each image by manually annotating 15 object classes: 12 different skeletal muscles, fat, bones, and vessels and nerves. Using the validation dataset, DeepLab v3+ was chosen from three different semantic segmentation models as the base model for segmenting individual thigh skeletal muscles; its feature extractor was also optimized to ResNet50. The mean Jaccard index and Dice index for the proposed method were 0.853 and 0.916, respectively, both significantly higher than those of the conventional DeepLab v3+ (Jaccard index: 0.810, p < .001; Dice index: 0.887, p < .001). The proposed method achieved a mean area error of 3.12% across the 15 object classes, making it useful for assessing skeletal muscle cross-sectional area from MR images.
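The Jaccard and Dice indices reported above are simple overlap ratios between predicted and ground-truth masks. An illustrative sketch for a single class on flattened binary masks (real pipelines compute this per class over image tensors):

```python
def dice_jaccard(pred, truth):
    """Dice and Jaccard indices for one class, given flat 0/1 masks.
    Dice = 2|A∩B| / (|A|+|B|); Jaccard = |A∩B| / |A∪B|."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    dice = 2 * inter / (p_sum + t_sum)
    jaccard = inter / (p_sum + t_sum - inter)
    return dice, jaccard
```

The two metrics are monotonically related (Dice = 2J/(1+J)), which is why 0.853 Jaccard and 0.916 Dice move together in the results.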

Beyond traditional orthopaedic data analysis: AI, multimodal models and continuous monitoring.

Oettl FC, Zsidai B, Oeding JF, Hirschmann MT, Feldt R, Tischer T, Samuelsson K

pubmed · Jun 1 2025
Multimodal artificial intelligence (AI) has the potential to revolutionise healthcare by enabling the simultaneous processing and integration of various data types, including medical imaging, electronic health records, genomic information and real-time data. This review explores the current applications and future potential of multimodal AI across healthcare, with a particular focus on orthopaedic surgery. In presurgical planning, multimodal AI has demonstrated significant improvements in diagnostic accuracy and risk prediction, with studies reporting area under the receiver operating characteristic curve (AUROC) values indicating good to excellent performance across various orthopaedic conditions. Intraoperative applications leverage advanced imaging and tracking technologies to enhance surgical precision, while postoperative care has been advanced through continuous patient monitoring and early detection of complications. Despite these advances, significant challenges remain in data integration, standardisation, and privacy protection. Technical solutions such as federated learning (which trains models across sites without centralising the data) and edge computing (which performs analysis on site, or close to it, rather than in remote data centres) are being developed to address these concerns while maintaining compliance with regulatory frameworks. As this field continues to evolve, the integration of multimodal AI promises to advance personalised medicine, improve patient outcomes, and transform healthcare delivery through more comprehensive and nuanced analysis of patient data. Level of Evidence: Level V.

Generative artificial intelligence enables the generation of bone scintigraphy images and improves generalization of deep learning models in data-constrained environments.

Haberl D, Ning J, Kluge K, Kumpf K, Yu J, Jiang Z, Constantino C, Monaci A, Starace M, Haug AR, Calabretta R, Camoni L, Bertagna F, Mascherbauer K, Hofer F, Albano D, Sciagra R, Oliveira F, Costa D, Nitsche C, Hacker M, Spielvogel CP

pubmed · Jun 1 2025
Advancements of deep learning in medical imaging are often constrained by the limited availability of large, annotated datasets, resulting in underperforming models when deployed under real-world conditions. This study investigated a generative artificial intelligence (AI) approach to creating synthetic medical images, using bone scintigraphy scans as an example, to increase the data diversity of small-scale datasets for more effective model training and improved generalization. We trained a generative model on <sup>99m</sup>Tc-bone scintigraphy scans from 9,170 patients in one center to generate high-quality, fully anonymized annotated scans representing two distinct disease patterns: (i) abnormal uptake indicative of bone metastases and (ii) cardiac uptake indicative of cardiac amyloidosis. A blinded reader study was performed to assess the clinical validity and quality of the generated data. We investigated the added value of the generated data by augmenting an independent small single-center dataset with synthetic data and training a deep learning model to detect abnormal uptake in a downstream classification task. We tested this model on 7,472 scans from 6,448 patients across four external sites in a cross-tracer and cross-scanner setting and associated the resulting model predictions with clinical outcomes. The clinical value and high quality of the synthetic imaging data were confirmed by four readers, who were unable to distinguish synthetic scans from real scans (average accuracy: 0.48 [95% CI 0.46-0.51]), disagreeing in 239 (60%) of 400 cases (Fleiss' kappa: 0.18). Adding synthetic data to the training set improved model performance by a mean (± SD) of 33 (± 10)% AUC (p < 0.0001) for detecting abnormal uptake indicative of bone metastases and by 5 (± 4)% AUC (p < 0.0001) for detecting uptake indicative of cardiac amyloidosis across both internal and external testing cohorts, compared to models without synthetic training data.
Patients with predicted abnormal uptake had adverse clinical outcomes (log-rank: p < 0.0001). Generative AI enables the targeted generation of bone scintigraphy images representing different clinical conditions. Our findings point to the potential of synthetic data to overcome challenges in data sharing and in developing reliable and prognostic deep learning models in data-limited environments.
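Reader agreement in the blinded study was summarized with Fleiss' kappa, which extends Cohen's kappa to more than two raters. An illustrative sketch of the computation from per-subject category counts (not the study's analysis code):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for n subjects each rated by the same number of
    raters; ratings[i][j] = number of raters placing subject i in
    category j."""
    n = len(ratings)                       # subjects
    m = sum(ratings[0])                    # raters per subject
    k = len(ratings[0])                    # categories
    # Marginal probability of each category across all ratings.
    p_j = [sum(row[j] for row in ratings) / (n * m) for j in range(k)]
    # Per-subject agreement: fraction of concordant rater pairs.
    p_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in ratings]
    p_bar = sum(p_i) / n                   # mean observed agreement
    p_e = sum(p * p for p in p_j)          # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

A kappa of 0.18, as reported above, indicates only slight agreement beyond chance, consistent with readers being unable to tell synthetic from real scans.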

Managing class imbalance in the training of a large language model to predict patient selection for total knee arthroplasty: Results from the Artificial intelligence to Revolutionise the patient Care pathway in Hip and knEe aRthroplastY (ARCHERY) project.

Farrow L, Anderson L, Zhong M

pubmed · Jun 1 2025
This study set out to test the efficacy of different techniques used to manage class imbalance, a type of data bias, in the application of a large language model (LLM) to predict patient selection for total knee arthroplasty (TKA). This study utilised data from the Artificial Intelligence to Revolutionise the Patient Care Pathway in Hip and Knee Arthroplasty (ARCHERY) project (ISRCTN18398037). Data included the pre-operative radiology reports of patients referred to secondary care for knee-related complaints from within the North of Scotland. A clinically based LLM (GatorTron) was trained to predict selection for TKA. Three methods for managing class imbalance were assessed: a standard model, class weighting, and majority-class undersampling. A total of 7707 individual knee radiology reports were included (dated from 2015 to 2022). The mean text length was 74 words (range 26-275). Only 910/7707 (11.8%) patients underwent TKA surgery (the designated 'minority class'). The class-weighting technique performed better for minority-class discrimination and calibration than the other two techniques (recall 0.61/AUROC 0.73 for class weighting, compared with 0.54/0.70 and 0.59/0.72 for the standard model and majority-class undersampling, respectively). There was also significant data loss with majority-class undersampling compared with class weighting. Class weighting appears to provide the optimal method of training an LLM to perform analytical tasks on free-text clinical information in the face of significant data bias ('class imbalance'). Such knowledge is an important consideration in the development of high-performance clinical AI models within Trauma and Orthopaedics.
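Class weighting of the kind compared here is commonly implemented as balanced inverse-frequency weights, which upweight the loss contribution of the minority class; the exact scheme used in ARCHERY is not specified in the abstract. A minimal sketch under that assumption:

```python
def class_weights(labels):
    """Balanced inverse-frequency weights: n_samples / (n_classes *
    count_c) for each class c, so rarer classes weigh more in the loss."""
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n, k = len(labels), len(counts)
    return {c: n / (k * cnt) for c, cnt in counts.items()}
```

With the cohort above (910 TKA cases out of 7707 reports), this scheme would give the minority class a weight of about 4.2 versus roughly 0.57 for the majority class, without discarding any reports, which is the data-loss advantage over undersampling noted in the results.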

Integrating finite element analysis and physics-informed neural networks for biomechanical modeling of the human lumbar spine.

Ahmadi M, Biswas D, Paul R, Lin M, Tang Y, Cheema TS, Engeberg ED, Hashemi J, Vrionis FD

pubmed · Jun 1 2025
Comprehending the biomechanical characteristics of the human lumbar spine is crucial for managing and preventing spinal disorders. Precise material properties derived from patient-specific CT scans are essential for simulations to accurately mimic real-life scenarios, which is invaluable in creating effective surgical plans. The integration of Finite Element Analysis (FEA) with Physics-Informed Neural Networks (PINNs) offers significant clinical benefits by automating lumbar spine segmentation and meshing. We developed an FEA model of the lumbar spine incorporating detailed anatomical and material properties derived from high-quality CT and MRI scans. The model includes vertebrae and intervertebral discs, segmented and meshed using advanced imaging and computational techniques. PINNs were implemented to integrate physical laws directly into the neural network training process, ensuring that the predicted material properties adhered to the governing equations of mechanics. The model achieved an accuracy of 94.30% in predicting material properties such as Young's modulus (14.88 GPa for cortical bone and 1.23 MPa for intervertebral discs), Poisson's ratio (0.25 and 0.47, respectively), bulk modulus (9.87 GPa and 6.56 MPa, respectively), and shear modulus (5.96 GPa and 0.42 MPa, respectively). In summary, we developed a lumbar spine FEA model using anatomical and material properties from CT and MRI scans; vertebrae and discs were segmented and meshed with advanced imaging techniques, while PINNs ensured that material predictions followed mechanical laws. The integration of FEA and PINNs allows for accurate, automated prediction of material properties and mechanical behaviors of the lumbar spine, significantly reducing manual input and enhancing reliability. This approach ensures dependable biomechanical simulations and supports the development of personalized treatment plans and surgical strategies, ultimately improving clinical outcomes for spinal disorders. 
This method improves surgical planning and outcomes, contributing to better patient care and recovery in spinal disorders.
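The reported moduli can be sanity-checked against the standard isotropic linear-elasticity relations G = E / (2(1 + ν)) and K = E / (3(1 − 2ν)); the values below are taken from the abstract, and small discrepancies (e.g., in bulk modulus) are presumably rounding:

```python
def shear_modulus(e, nu):
    """Shear modulus G = E / (2 (1 + nu)) for an isotropic material."""
    return e / (2 * (1 + nu))

def bulk_modulus(e, nu):
    """Bulk modulus K = E / (3 (1 - 2 nu)) for an isotropic material."""
    return e / (3 * (1 - 2 * nu))

# Cortical bone (E = 14.88 GPa, nu = 0.25) from the abstract:
# G comes out to about 5.95 GPa, matching the reported 5.96 GPa.
# Intervertebral disc (E = 1.23 MPa, nu = 0.47):
# G comes out to about 0.42 MPa, matching the reported value.
```

Only two of the four reported constants are independent for an isotropic material, so such cross-checks are a cheap consistency test on PINN outputs.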