
Research on multi-algorithm and explainable AI techniques for predictive modeling of acute spinal cord injury using multimodal data.

Tai J, Wang L, Xie Y, Li Y, Fu H, Ma X, Li H, Li X, Yan Z, Liu J

PubMed · May 29, 2025
Machine learning technology has been extensively applied in the medical field, particularly in the context of disease prediction and patient rehabilitation assessment. Acute spinal cord injury (ASCI) is a sudden trauma that frequently results in severe neurological deficits and a significant decline in quality of life. Early prediction of neurological recovery is crucial for personalized treatment planning. Although such methods have been extensively explored in other medical fields, this study is the first to apply multiple machine learning methods and Shapley Additive Explanations (SHAP) analysis specifically to ASCI for predicting neurological recovery. A total of 387 ASCI patients were included, with clinical, imaging, and laboratory data collected. Key features were selected using univariate analysis, Lasso regression, and other feature selection techniques, integrating clinical, radiomics, and laboratory data. A range of machine learning models, including XGBoost, Logistic Regression, KNN, SVM, Decision Tree, Random Forest, LightGBM, ExtraTrees, Gradient Boosting, and Gaussian Naive Bayes, were evaluated, with Gaussian Naive Bayes exhibiting the best performance. Radiomics features extracted from T2-weighted fat-suppressed MRI scans, such as original_glszm_SizeZoneNonUniformity and wavelet-HLL_glcm_SumEntropy, significantly enhanced predictive accuracy. SHAP analysis identified critical clinical features in the predictive model, including IMLL, INR, BMI, Cys C, and RDW-CV. The model was validated and demonstrated excellent performance across multiple metrics. Its clinical utility and interpretability were further enhanced through patient clustering and nomogram analysis. This model has the potential to serve as a reliable tool for clinicians in formulating personalized treatment plans and assessing prognosis.
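A minimal sketch of the modeling-plus-explanation step this abstract describes: a Gaussian Naive Bayes classifier paired with model-agnostic SHAP values. The data here are random placeholders standing in for the paper's selected clinical/radiomics features (e.g., INR, BMI), not the study's dataset.

```python
import numpy as np
import shap
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(387, 5))                  # placeholder for selected features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # placeholder recovery label
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = GaussianNB().fit(X_tr, y_tr)

# KernelExplainer is model-agnostic, so it works with GaussianNB's predict_proba.
explainer = shap.KernelExplainer(
    lambda data: model.predict_proba(data)[:, 1], shap.sample(X_tr, 50)
)
shap_values = explainer.shap_values(X_te[:10])
print(np.abs(shap_values).mean(axis=0))        # mean |SHAP| per feature
```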

Exploring best-performing radiomic features with combined multilevel discrete wavelet decompositions for multiclass COVID-19 classification using chest X-ray images.

Özcan H

PubMed · May 29, 2025
Discrete wavelet transforms have been applied in many machine learning models for the analysis of COVID-19; however, little is known about the impact of combined multilevel wavelet decompositions on disease identification. This study proposes a computer-aided diagnosis system that addresses the combined multilevel effects of multiscale radiomic features on multiclass COVID-19 classification using chest X-ray images. A two-level discrete wavelet transform was applied to an optimal region of interest to obtain multiscale decompositions. Both approximation and detail coefficients were extensively investigated across varying frequency bands through 1240 experimental models. High dimensionality in the feature space was managed using a proposed filter- and wrapper-based feature selection approach. A comprehensive comparison was conducted between the bands and features to identify the best-performing ensemble algorithm models. The results indicated that incorporating multilevel decompositions can improve model performance. An inclusive region of interest, encompassing both lungs and the mediastinal regions, was identified to enhance feature representation. The light gradient-boosting machine, applied to the combined bands with basic, gray-level, Gabor, histogram-of-oriented-gradients, and local-binary-pattern features, achieved the highest weighted precision, sensitivity, specificity, and accuracy of 97.50%, 97.50%, 98.75%, and 97.50%, respectively. The COVID-19-versus-the-rest receiver operating characteristic area under the curve was 0.9979. These results underscore the potential of combining decomposition levels with the original signals and employing an inclusive region of interest for effective COVID-19 detection, while the feature selection and training processes remain efficient within a practical computational time.
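A minimal sketch of the two-level 2-D discrete wavelet decomposition described above, using PyWavelets. The wavelet family ('db1') and the simple energy/entropy descriptors are illustrative assumptions, not the paper's exact radiomic pipeline.

```python
import numpy as np
import pywt

image = np.random.rand(256, 256)  # placeholder for a chest X-ray region of interest

# Level-2 decomposition: [cA2, (cH2, cV2, cD2), (cH1, cV1, cD1)]
cA2, (cH2, cV2, cD2), (cH1, cV1, cD1) = pywt.wavedec2(image, wavelet="db1", level=2)

def band_features(band):
    """Simple per-band radiomic-style descriptors: energy and Shannon entropy."""
    p = np.abs(band).ravel()
    p = p / (p.sum() + 1e-12)
    entropy = -(p * np.log2(p + 1e-12)).sum()
    return {"energy": float((band ** 2).mean()), "entropy": float(entropy)}

# Features from both approximation and detail bands, as the study investigates.
for name, band in [("cA2", cA2), ("cH2", cH2), ("cD2", cD2), ("cH1", cH1)]:
    print(name, band_features(band))
```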

Artificial Intelligence in Value-Based Health Care.

Shah R, Bozic KJ, Jayakumar P

PubMed · May 28, 2025
Artificial intelligence (AI) presents new opportunities to advance value-based health care in orthopedic surgery through three potential mechanisms: agency, automation, and augmentation. AI may enhance patient agency through improved health literacy and remote monitoring while reducing costs through triage and a reduction in specialist visits. In automation, AI optimizes operating room scheduling and streamlines administrative tasks, with documented cost savings and improved efficiency. For augmentation, AI has been shown to be accurate in diagnostic imaging interpretation and surgical planning, while enabling more precise outcome predictions and personalized treatment approaches. However, implementation faces substantial challenges, including resistance from healthcare professionals, technical barriers to data quality and privacy, and the significant financial investment required for infrastructure. Success in healthcare AI integration requires careful attention to regulatory frameworks, data privacy, and clinical validation.

An AI system for continuous knee osteoarthritis severity grading: An anomaly detection inspired approach with few labels.

Belton N, Lawlor A, Curran KM

PubMed · May 28, 2025
The diagnostic accuracy and subjectivity of existing knee osteoarthritis (OA) ordinal grading systems have been the subject of ongoing debate and concern. Existing automated solutions are trained to emulate these imperfect systems, while also relying on large annotated databases for fully supervised training. This work proposes a three-stage approach for automated continuous grading of knee OA built upon the principles of anomaly detection (AD): learning a robust representation of healthy knee X-rays and grading disease severity by a scan's distance from the centre of normality. In the first stage, SS-FewSOME is proposed, a self-supervised AD technique that learns the 'normal' representation, requiring only examples of healthy subjects and <3% of the labels that existing methods require. In the second stage, this model is used to pseudo-label a subset of unlabelled data as 'normal' or 'anomalous', followed by denoising of the pseudo labels with CLIP. The final stage involves retraining on labelled and pseudo-labelled data using the proposed Dual Centre Representation Learning (DCRL), which learns the centres of two representation spaces, normal and anomalous. Disease severity is then graded based on the distance to the learned centres. The proposed methodology outperforms existing techniques by margins of up to 24% in terms of OA detection, and the disease severity scores correlate with the Kellgren-Lawrence grading system at the same level as human expert performance. Code available at https://github.com/niamhbelton/SS-FewSOME_Disease_Severity_Knee_Osteoarthritis.
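A minimal sketch of the core grading idea: estimate the centre of 'normal' embeddings, then score severity as distance from that centre. The embedding function below is a placeholder; the actual self-supervised SS-FewSOME encoder is described in the linked repository.

```python
import numpy as np

def embed(x):
    return x  # placeholder: in practice, a trained encoder maps X-rays to vectors

rng = np.random.default_rng(0)
normal_xrays = rng.normal(loc=0.0, size=(100, 128))  # healthy-subject embeddings
test_xrays = rng.normal(loc=1.5, size=(5, 128))      # unseen, possibly diseased

centre = embed(normal_xrays).mean(axis=0)            # centre of normality

# Continuous severity score: Euclidean distance to the normal centre.
severity = np.linalg.norm(embed(test_xrays) - centre, axis=1)
print(severity)
```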

Single Domain Generalization for Alzheimer's Detection from 3D MRIs with Pseudo-Morphological Augmentations and Contrastive Learning

Zobia Batool, Huseyin Ozkan, Erchan Aptoula

arXiv preprint · May 28, 2025
Although Alzheimer's disease detection via MRIs has advanced significantly thanks to contemporary deep learning models, challenges such as class imbalance, protocol variations, and limited dataset diversity often hinder their generalization capacity. To address this issue, this article focuses on the single domain generalization setting, where, given the data of one domain, a model is designed and developed to achieve maximal performance with respect to an unseen domain with a distinct distribution. Since brain morphology is known to play a crucial role in Alzheimer's diagnosis, we propose learnable pseudo-morphological modules aimed at producing shape-aware, anatomically meaningful, class-specific augmentations, in combination with a supervised contrastive learning module to extract robust class-specific representations. Experiments conducted across three datasets show improved performance and generalization capacity, especially under class imbalance and imaging protocol variations. The source code will be made available upon acceptance at https://github.com/zobia111/SDG-Alzheimer.
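A minimal sketch of a supervised contrastive loss over class-specific representations, in the spirit of the module described above (assumptions: L2-normalised embeddings and a temperature of 0.1; this is not the authors' released code).

```python
import torch
import torch.nn.functional as F

def sup_con_loss(z, labels, temperature=0.1):
    """z: (N, D) embeddings; labels: (N,) class ids. Pulls same-class pairs together."""
    z = F.normalize(z, dim=1)
    sim = z @ z.T / temperature                        # pairwise similarities
    mask = labels.unsqueeze(0) == labels.unsqueeze(1)  # positives share a label
    mask.fill_diagonal_(False)
    # Log-softmax over all other samples, excluding self-similarity.
    logits = sim - torch.eye(len(z), device=z.device) * 1e9
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-probability of positives per anchor (skip anchors without positives).
    pos_counts = mask.sum(1).clamp(min=1)
    loss = -(log_prob * mask).sum(1) / pos_counts
    return loss[mask.sum(1) > 0].mean()

z = torch.randn(8, 32, requires_grad=True)   # placeholder MRI embeddings
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
print(sup_con_loss(z, labels))
```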

Artificial Intelligence Augmented Cerebral Nuclear Imaging.

Currie GM, Hawk KE

PubMed · May 28, 2025
Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has significant potential to advance the capabilities of nuclear neuroimaging. The current and emerging applications of ML and DL in the processing, analysis, enhancement, and interpretation of SPECT and PET brain imaging are explored. Key developments include automated image segmentation, disease classification, and radiomic feature extraction, spanning lower-dimensionality first- and second-order radiomics, higher-dimensionality third-order radiomics, and more abstract fourth-order deep radiomics. DL-based reconstruction, attenuation correction using pseudo-CT generation, and denoising of low-count studies all have a role in enhancing image quality. AI contributes to sustainability through applications in radioligand design and preclinical imaging, while federated learning addresses data security challenges to improve research and development in nuclear cerebral imaging. There is also potential for generative AI to transform the nuclear cerebral imaging space through solutions to data limitations, image enhancement, patient-centered care, workflow efficiencies, and trainee education. Innovations in ML and DL are re-engineering the nuclear neuroimaging ecosystem and reimagining tomorrow's precision medicine landscape.

C2 pars interarticularis length on the side of high-riding vertebral artery with implications for pars screw insertion.

Klepinowski T, Kałachurska M, Chylewski M, Żyłka N, Taterra D, Łątka K, Pala B, Poncyljusz W, Sagan L

PubMed · May 28, 2025
C2 pars interarticularis length (C2PIL) required for pars screws has not been thoroughly studied in subjects with a high-riding vertebral artery (HRVA). We aimed to measure C2PIL specifically on sides with HRVA, define short pars, determine the optimal pars screw length, and incorporate C2PIL into HRVA clusters using machine learning algorithms. A clinical anatomical study based on cervical CT was conducted with a STROBE-compliant case-control design. HRVA was defined according to accepted criteria. Interobserver, intraobserver, and inter-software agreement coefficients for HRVA were adopted from our previous study. Sample size was estimated with the pwr package, and C2PIL was measured. The cut-off value and predictive statistics of C2PIL for HRVA were computed with the cutpointr package. Unsupervised machine learning clustering was applied with all three pars parameters. 345 potential screw insertion sites (PSIS) were grouped as HRVA (143 PSIS in 110 subjects) or controls (202 PSIS in 101 subjects). 68% of participants were female. The median C2PIL in the HRVA group was 13.7 mm with an interquartile range (IQR) of 1.7, whereas in controls it was 19.8 mm (IQR = 2.7). The optimal cut-off value of C2PIL discriminating HRVA was 16.06 mm, with a sensitivity of 96.5% and a specificity of 99.3%. Therefore, a clinically important short pars was defined as ≤ 16 mm, rounding to the nearest screw length. Two clusters were created incorporating the three pars interarticularis parameters. In preoperative planning, the identified C2PIL cut-off of ≤ 16 mm may assist surgeons in early recognition of HRVA. The average screw lengths of 14 mm for bicortical and 12 mm for safer unicortical purchase in HRVA cases may serve as practical intraoperative reference points, particularly in situations requiring rapid decision-making or when navigation systems are unavailable. Moreover, C2PIL complements the classic HRVA parameters within the dichotomized clustering framework.
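The study derives its cut-off with R's cutpointr; a rough Python equivalent using Youden's J statistic on a ROC curve is sketched below, with simulated lengths drawn around the reported medians (13.7 mm HRVA vs 19.8 mm control), not the study's actual measurements.

```python
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
hrva = rng.normal(13.7, 1.3, 143)      # simulated C2PIL on HRVA sides (mm)
control = rng.normal(19.8, 2.0, 202)   # simulated control sides (mm)

lengths = np.concatenate([hrva, control])
is_hrva = np.concatenate([np.ones(143), np.zeros(202)])

# Shorter pars predicts HRVA, so score with the negated length.
fpr, tpr, thresholds = roc_curve(is_hrva, -lengths)
best = np.argmax(tpr - fpr)            # Youden's J = sensitivity + specificity - 1
print(f"cut-off: C2PIL <= {-thresholds[best]:.2f} mm, "
      f"sensitivity {tpr[best]:.3f}, specificity {1 - fpr[best]:.3f}")
```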

Integrating SEResNet101 and SE-VGG19 for advanced cervical lesion detection: a step forward in precision oncology.

Ye Y, Chen Y, Pan J, Li P, Ni F, He H

PubMed · May 28, 2025
Cervical cancer remains a significant global health issue, with accurate differentiation between low-grade (LSIL) and high-grade squamous intraepithelial lesions (HSIL) crucial for effective screening and management. Current methods, such as Pap smears and HPV testing, often fall short in sensitivity and specificity. Deep learning models hold the potential to enhance the accuracy of cervical cancer screening but require thorough evaluation to ascertain their practical utility. This study compares the performance of two advanced deep learning models, SEResNet101 and SE-VGG19, in classifying cervical lesions using a dataset of 3,305 high-quality colposcopy images. We assessed the models based on their accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). The SEResNet101 model demonstrated superior performance over SE-VGG19 across all evaluated metrics. Specifically, SEResNet101 achieved a sensitivity of 95%, a specificity of 97%, and an AUC of 0.98, compared to 89% sensitivity, 93% specificity, and an AUC of 0.94 for SE-VGG19. These findings suggest that SEResNet101 could significantly reduce both over- and under-treatment rates by enhancing diagnostic precision. Our results indicate that SEResNet101 offers a promising enhancement over existing screening methods, integrating advanced deep learning algorithms to significantly improve the precision of cervical lesion classification. This study advocates for the inclusion of SEResNet101 in clinical workflows to enhance cervical cancer screening protocols, thereby improving patient outcomes. Future work should focus on multicentric trials to validate these findings and facilitate widespread clinical adoption.
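A minimal squeeze-and-excitation (SE) block of the kind that gives SEResNet101 and SE-VGG19 their names (a generic sketch of the mechanism, not the authors' implementation).

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)       # squeeze: global channel context
        self.fc = nn.Sequential(                   # excitation: per-channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                # recalibrate feature channels

x = torch.randn(2, 64, 32, 32)                      # placeholder colposcopy features
print(SEBlock(64)(x).shape)                         # torch.Size([2, 64, 32, 32])
```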

Efficient feature extraction using light-weight CNN attention-based deep learning architectures for ultrasound fetal plane classification.

Sivasubramanian A, Sasidharan D, Sowmya V, Ravi V

PubMed · May 28, 2025
Ultrasound fetal imaging is beneficial for supporting prenatal development monitoring because it is affordable and non-intrusive. Nevertheless, fetal plane classification (FPC) remains challenging and time-consuming for obstetricians, since it depends on nuanced clinical aspects that increase the difficulty of identifying relevant features of the fetal anatomy. Thus, to assist with accurate feature extraction, a lightweight artificial intelligence architecture leveraging convolutional neural networks and attention mechanisms is proposed to classify the largest benchmark ultrasound dataset. The approach fine-tunes lightweight EfficientNet feature-extraction backbones pre-trained on ImageNet-1k to classify key fetal planes such as the brain, femur, thorax, cervix, and abdomen. Our methodology incorporates an attention mechanism to refine features and a 3-layer perceptron for classification, achieving superior performance with the highest Top-1 accuracy of 96.25%, Top-2 accuracy of 99.80%, and F1-score of 0.9576. Importantly, the model has 40x fewer trainable parameters than existing benchmark ensemble or transformer pipelines, facilitating easy deployment on edge devices to help clinical practitioners with real-time FPC. The findings are also interpreted using GradCAM to carry out clinical correlation, aiding doctors with diagnostics and improving treatment plans for expectant mothers.
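A rough sketch of the pipeline described above: a lightweight EfficientNet backbone pre-trained on ImageNet-1k, an attention-style feature reweighting, and a 3-layer perceptron head for the five fetal planes. The attention design and layer sizes are assumptions; the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn
from torchvision import models

backbone = models.efficientnet_b0(weights=models.EfficientNet_B0_Weights.DEFAULT)
backbone.classifier = nn.Identity()          # expose the 1280-d pooled features

class FetalPlaneClassifier(nn.Module):
    def __init__(self, feat_dim=1280, n_classes=5):
        super().__init__()
        self.backbone = backbone
        self.attn = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
        self.head = nn.Sequential(               # 3-layer perceptron classifier
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        f = self.backbone(x)
        return self.head(f * self.attn(f))       # attention-refined features

logits = FetalPlaneClassifier()(torch.randn(1, 3, 224, 224))
print(logits.shape)                               # torch.Size([1, 5])
```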

Deep learning radiomics fusion model to predict visceral pleural invasion of clinical stage IA lung adenocarcinoma: a multicenter study.

Zhao J, Wang T, Wang B, Satishkumar BM, Ding L, Sun X, Chen C

PubMed · May 28, 2025
To assess the predictive performance, risk stratification capabilities, and auxiliary diagnostic utility of radiomics, deep learning, and fusion models in identifying visceral pleural invasion (VPI) in lung adenocarcinoma. A total of 449 patients (female:male, 263:186; 59.8 ± 10.5 years) diagnosed with clinical stage IA lung adenocarcinoma (LAC) at two distinct hospitals were enrolled in the study and divided into a training cohort (n = 289) and an external test cohort (n = 160). Fusion models were constructed at the feature level and the decision level, respectively. A comprehensive analysis was conducted to assess the predictive ability and prognostic value of the radiomics, deep learning, and fusion models. The diagnostic performance of radiologists of varying seniority, with and without the assistance of the optimal model, was compared. The late fusion model demonstrated superior diagnostic performance (AUC = 0.812) compared to the clinical (AUC = 0.650), radiomics (AUC = 0.710), deep learning (AUC = 0.770), and early fusion (AUC = 0.586) models in the external test cohort. Multivariate Cox regression analysis showed that the VPI status predicted by the late fusion model was independently associated with patient disease-free survival (DFS) (p = 0.044). Furthermore, model assistance significantly improved radiologist performance, particularly for junior radiologists: the AUC increased by 0.133 (p < 0.001), reaching a level comparable to that of the senior radiologist without model assistance (AUC: 0.745 vs. 0.730, p = 0.790). The proposed decision-level (late fusion) model significantly reduces the risk of overfitting and demonstrates excellent robustness in multicenter external validation; it can predict VPI status in LAC, aid in prognostic stratification, and assist radiologists in achieving higher diagnostic performance.
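A minimal sketch of decision-level (late) fusion as described above: average the predicted VPI probabilities of independently trained radiomics and deep learning models. Equal weighting and the simulated probabilities are assumptions for illustration; fusion weights could be tuned on the training cohort.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 160)                      # placeholder external test labels
p_radiomics = np.clip(y_true * 0.3 + rng.random(160) * 0.7, 0, 1)
p_deep = np.clip(y_true * 0.4 + rng.random(160) * 0.6, 0, 1)

p_fused = (p_radiomics + p_deep) / 2                  # decision-level fusion
for name, p in [("radiomics", p_radiomics), ("deep", p_deep), ("late fusion", p_fused)]:
    print(f"{name}: AUC = {roc_auc_score(y_true, p):.3f}")
```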