Interpretable Machine Learning Radiomics Model Predicts 5-year Recurrence-Free Survival in Non-metastatic Clear Cell Renal Cell Carcinoma: A Multicenter and Retrospective Cohort Study.

Zhang J, Huang W, Li Y, Zhang X, Chen Y, Chen S, Ming Q, Jiang Q, Xv Y

PubMed · Jul 1, 2025
To develop and validate a computed tomography (CT) radiomics-based interpretable machine learning (ML) model for predicting 5-year recurrence-free survival (RFS) in non-metastatic clear cell renal cell carcinoma (ccRCC). A total of 559 patients with non-metastatic ccRCC were retrospectively enrolled from eight independent institutions between March 2013 and January 2019 and assigned to the primary set (n=271), external test set 1 (n=216), and external test set 2 (n=72). A total of 1316 radiomics features were extracted via PyRadiomics. The least absolute shrinkage and selection operator (LASSO) algorithm was used for feature selection and Rad-Score construction. Patients were stratified into low and high 5-year recurrence risk groups based on the Rad-Score, followed by Kaplan-Meier analyses. Five ML models integrating the Rad-Score and clinicopathological risk factors were compared. Model performance was evaluated via discrimination, calibration, and decision curve analysis. The most robust ML model was interpreted using the SHapley Additive exPlanations (SHAP) method. Thirteen radiomic features were selected to construct the Rad-Score, which predicted 5-year RFS with areas under the receiver operating characteristic curve (AUCs) of 0.734-0.836. Kaplan-Meier analysis showed significant survival differences based on the Rad-Score (all log-rank p values <0.05). The random forest model outperformed the other models, obtaining AUCs of 0.826 [95% confidence interval (CI): 0.766-0.879] and 0.799 (95% CI: 0.670-0.899) in external test sets 1 and 2, respectively. SHAP analysis suggested positive associations between the contributing factors and 5-year RFS status in non-metastatic ccRCC. The CT radiomics-based interpretable ML model can effectively predict 5-year RFS in non-metastatic ccRCC patients, distinguishing between low and high 5-year recurrence risks.
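
As an illustration of the Rad-Score construction described above, the sketch below shows a typical LASSO-based pipeline in scikit-learn; the data frame `X`, label vector `y`, and penalty settings are illustrative assumptions rather than the authors' exact code.

```python
# Sketch: LASSO feature selection and Rad-Score construction from radiomics
# features. `X` (a DataFrame of PyRadiomics features) and `y` (5-year
# recurrence status) are assumed inputs; this is not the authors' exact code.
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

def build_rad_score(X: pd.DataFrame, y: np.ndarray):
    # Standardize features so the L1 penalty treats them on a common scale.
    scaler = StandardScaler()
    Xz = scaler.fit_transform(X)

    # Cross-validated LASSO shrinks most coefficients to zero, leaving a
    # small subset of informative features (13 in the study above).
    lasso = LassoCV(cv=10, random_state=0).fit(Xz, y)
    selected = X.columns[lasso.coef_ != 0]

    # Rad-Score: intercept plus the weighted sum of the standardized features.
    rad_score = lasso.intercept_ + Xz @ lasso.coef_
    return selected, rad_score

# Patients can then be split into low/high recurrence-risk groups at a
# Rad-Score cutoff (e.g., the median) before Kaplan-Meier analysis.
```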

Exploring the Incremental Value of Aorta Enhancement Normalization Method in Evaluating Renal Cell Carcinoma Histological Subtypes: A Multi-center Large Cohort Study.

Huang Z, Wang L, Mei H, Liu J, Zeng H, Liu W, Yuan H, Wu K, Liu H

PubMed · Jul 1, 2025
The classification of renal cell carcinoma (RCC) histological subtypes plays a crucial role in clinical diagnosis. However, traditional image normalization methods often struggle with discrepancies arising from differences in imaging parameters, scanning devices, and multi-center data, which can impact model robustness and generalizability. This study included 1628 patients with pathologically confirmed RCC who underwent nephrectomy across eight cohorts. These were divided into a training set, a validation set, external test dataset 1, and external test dataset 2. We proposed an "Aortic Enhancement Normalization" (AEN) method based on the lesion-to-aorta enhancement ratio and developed an automated lesion segmentation model along with a multi-scale CT feature extractor. Several machine learning algorithms, including Random Forest, LightGBM, CatBoost, and XGBoost, were used to build classification models and compare the performance of the AEN and traditional approaches for evaluating histological subtypes (clear cell renal cell carcinoma [ccRCC] vs. non-ccRCC). Additionally, we employed SHAP analysis to further enhance the transparency and interpretability of the model's decisions. The experimental results demonstrated that the AEN method outperformed the traditional normalization method across all four algorithms. Specifically, in the XGBoost model, the AEN method significantly improved performance in both internal and external validation sets, achieving AUROC values of 0.89, 0.81, and 0.80, highlighting its superior performance and strong generalizability. SHAP analysis revealed that multi-scale CT features played a critical role in the model's decision-making process. The proposed AEN method effectively reduces the impact of imaging parameter differences, significantly improving the robustness and generalizability of histological subtype (ccRCC vs. non-ccRCC) models. This approach provides new insights for multi-center data analysis and demonstrates promising clinical applicability.
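
The AEN idea above amounts to rescaling lesion enhancement by the aortic enhancement measured on the same scan. A minimal sketch of such a lesion-to-aorta ratio is shown below, assuming binary masks for the lesion and the aortic lumen; the statistic actually used in the study may differ.

```python
# Sketch of an aorta-normalized enhancement (AEN-style) feature, assuming
# binary masks for the lesion and the aortic lumen on the same contrast-
# enhanced CT volume; the exact statistic used in the paper may differ.
import numpy as np

def aorta_normalized_enhancement(ct_hu: np.ndarray,
                                 lesion_mask: np.ndarray,
                                 aorta_mask: np.ndarray) -> float:
    """Mean lesion attenuation divided by mean aortic attenuation (both in HU)."""
    lesion_hu = ct_hu[lesion_mask > 0].mean()
    aorta_hu = ct_hu[aorta_mask > 0].mean()
    # Dividing by the aortic enhancement compensates for differences in
    # contrast timing, injection protocol, and scanner across centers.
    return float(lesion_hu / aorta_hu)
```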

Preoperative Prediction of STAS Risk in Primary Lung Adenocarcinoma Using Machine Learning: An Interpretable Model with SHAP Analysis.

Wang P, Cui J, Du H, Qian Z, Zhan H, Zhang H, Ye W, Meng W, Bai R

PubMed · Jul 1, 2025
Accurate preoperative prediction of spread through air spaces (STAS) in primary lung adenocarcinoma (LUAD) is critical for optimizing surgical strategies and improving patient outcomes. To develop a machine learning (ML)-based model to predict STAS using preoperative CT imaging features and clinicopathological data, while enhancing interpretability through Shapley additive explanations (SHAP) analysis. This multicenter retrospective study included 1237 patients with pathologically confirmed primary LUAD from three hospitals. Patients from Center 1 (n=932) were divided into a training set (n=652) and an internal test set (n=280). Patients from Centers 2 (n=165) and 3 (n=140) formed external validation sets. CT imaging features and clinical variables were selected using Boruta and least absolute shrinkage and selection operator regression. Seven ML models were developed and evaluated using five-fold cross-validation. Performance was assessed using F1 score, recall, precision, specificity, sensitivity, and area under the receiver operating characteristic curve (AUC). The Extreme Gradient Boosting (XGB) model achieved AUCs of 0.973 (training set), 0.862 (internal test set), and 0.842/0.810 (external validation sets). SHAP analysis identified nodule type, carcinoembryonic antigen, maximum nodule diameter, and lobulated sign as key features for predicting STAS. Logistic regression analysis confirmed these as independent risk factors. The XGB model demonstrated high predictive accuracy and interpretability for STAS. By integrating widely available clinical and imaging features, this model offers a practical and effective tool for preoperative risk stratification, supporting personalized surgical planning in primary LUAD management.
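
The following is a compact sketch of an XGBoost classifier interpreted with SHAP, in the spirit of the pipeline above; the hyperparameters are placeholders, and `X`/`y` are assumed to hold the selected CT and clinical features and STAS labels.

```python
# Sketch: XGBoost classifier with SHAP interpretation, mirroring the kind of
# pipeline described above. `X` (selected CT + clinical features) and `y`
# (STAS status) are assumed inputs; hyperparameters are placeholders.
import shap
import xgboost as xgb
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05,
                          eval_metric="auc")
model.fit(X_train, y_train)

# TreeExplainer attributes each prediction to individual features (e.g.,
# nodule type, CEA, maximum diameter), producing the per-case explanations
# summarized in a SHAP summary (beeswarm) plot.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```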

Multiparametric MRI-based Interpretable Machine Learning Radiomics Model for Distinguishing Between Luminal and Non-luminal Tumors in Breast Cancer: A Multicenter Study.

Zhou Y, Lin G, Chen W, Chen Y, Shi C, Peng Z, Chen L, Cai S, Pan Y, Chen M, Lu C, Ji J, Chen S

PubMed · Jul 1, 2025
To construct and validate an interpretable machine learning (ML) radiomics model derived from multiparametric magnetic resonance imaging (MRI) images to differentiate between luminal and non-luminal breast cancer (BC) subtypes. This study enrolled 1098 BC participants from four medical centers, categorized into a training cohort (n = 580) and validation cohorts 1-3 (n = 252, 89, and 177, respectively). Radiomics features were extracted from multiparametric MRI sequences, including T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) maps, and dynamic contrast-enhanced (DCE) imaging. Five ML algorithms were applied to develop radiomics models, from which the best-performing model was identified. An ML-based combined model including the optimal radiomics features and clinical predictors was constructed, with performance assessed through receiver operating characteristic (ROC) analysis. The Shapley additive explanation (SHAP) method was utilized to assess model interpretability. Tumor size and MR-reported lymph node status were chosen as significant clinical variables. Thirteen radiomics features were identified from the multiparametric MRI images. The extreme gradient boosting (XGBoost) radiomics model performed the best, achieving areas under the curve (AUCs) of 0.941, 0.903, 0.862, and 0.894 across the training cohort and validation cohorts 1-3, respectively. The XGBoost combined model showed favorable discriminative power, with AUCs of 0.956, 0.912, 0.894, and 0.906 in the training cohort and validation cohorts 1-3, respectively. The SHAP visualization facilitated global interpretation, identifying "ADC_wavelet-HLH_glszm_ZoneEntropy" and "DCE_wavelet-HLL_gldm_DependenceVariance" as the most significant features for the model's predictions. The XGBoost combined model derived from multiparametric MRI may proficiently differentiate between luminal and non-luminal BC and aid in treatment decision-making. An interpretable machine learning radiomics model can preoperatively predict luminal and non-luminal subtypes in breast cancer, thereby aiding therapeutic decision-making.
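
The feature names quoted above follow the PyRadiomics convention of image filter, feature class, and feature name. A small sketch of extracting such wavelet-filtered texture features from one sequence with PyRadiomics is given below; the file paths and settings are illustrative assumptions.

```python
# Sketch: extracting wavelet-filtered texture features (such as the
# "wavelet-HLH glszm ZoneEntropy" feature named above) from one MRI sequence
# with PyRadiomics; file paths and settings are illustrative assumptions.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)
extractor.enableImageTypeByName("Wavelet")   # adds wavelet-decomposed images
extractor.enableFeatureClassByName("glszm")  # gray-level size-zone matrix features

# One sequence (e.g., the ADC map) and its tumor mask per patient.
features = extractor.execute("adc_map.nii.gz", "tumor_mask.nii.gz")
zone_entropy = features["wavelet-HLH_glszm_ZoneEntropy"]
```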

Perilesional dominance: radiomics of multiparametric MRI enhances differentiation of IgG4-Related ophthalmic disease and orbital MALT lymphoma.

Li J, Zhou C, Qu X, Du L, Yuan Q, Han Q, Xian J

PubMed · Jul 1, 2025
To develop and validate a diagnostic framework integrating intralesional (ILN) and perilesional (PLN) radiomics derived from multiparametric MRI (mpMRI) for distinguishing IgG4-related ophthalmic disease (IgG4-ROD) from orbital mucosa-associated lymphoid tissue (MALT) lymphoma. This multicenter retrospective study analyzed 214 histopathologically confirmed cases (68 IgG4-ROD, 146 MALT lymphoma) from two institutions (2019-2024). A LASSO-SVM classifier was optimized through comparative evaluation of seven machine learning models, incorporating fused radiomic features (1,197 features) from the ILN/PLN regions. Diagnostic performance was benchmarked against two subspecialty radiologists (10-20 years' experience) using the area under the receiver operating characteristic curve (AUC), the precision-recall AUC (PR-AUC), and decision curve analysis (DCA), adhering to CLEAR/METRICS guidelines. The fusion model (FR_RAD) achieved state-of-the-art performance, with an AUC of 0.927 (95% CI 0.902-0.958) and a PR-AUC of 0.901 (95% CI 0.862-0.940) in the training set, and an AUC of 0.907 (95% CI 0.857-0.965) and a PR-AUC of 0.872 (95% CI 0.820-0.924) on external testing. In contrast, the subspecialty radiologists achieved lower AUCs of 0.671-0.740 (95% CI 0.630-0.780) and PR-AUCs of 0.553-0.632 (95% CI 0.521-0.664) (all p < 0.001). FR_RAD also outperformed the radiologists in accuracy (88.6% vs. 66.2% and 71.3%; p < 0.01). DCA demonstrated a net benefit of 0.18 at a high-risk threshold of 30%, equivalent to avoiding 18 unnecessary biopsies per 100 cases. The fusion model integrating multi-regional radiomics from mpMRI achieves precise differentiation between IgG4-ROD and orbital MALT lymphoma, outperforming subspecialty radiologists. This approach highlights the transformative potential of spatial radiomics analysis in resolving diagnostic uncertainties and reducing reliance on invasive procedures for orbital lesion characterization.
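
The net-benefit figure quoted above comes from decision curve analysis. A minimal sketch of the standard net-benefit calculation at a single risk threshold is shown below; the inputs are assumed binary labels and predicted probabilities.

```python
# Sketch: net benefit at a given risk threshold, as used in decision curve
# analysis. `y_true` holds binary labels and `y_prob` predicted probabilities.
import numpy as np

def net_benefit(y_true: np.ndarray, y_prob: np.ndarray, threshold: float) -> float:
    n = len(y_true)
    pred_pos = y_prob >= threshold
    tp = np.sum(pred_pos & (y_true == 1))
    fp = np.sum(pred_pos & (y_true == 0))
    # Standard DCA definition: true positives gained minus false positives,
    # the latter weighted by the odds of the threshold probability.
    return tp / n - fp / n * (threshold / (1 - threshold))

# The abstract interprets a net benefit of 0.18 at a 30% threshold as
# avoiding 18 unnecessary biopsies per 100 cases.
```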

Deep learning for gender estimation using hand radiographs: a comparative evaluation of CNN models.

Ulubaba HE, Atik İ, Çiftçi R, Eken Ö, Aldhahi MI

PubMed · Jul 1, 2025
Accurate gender estimation plays a crucial role in forensic identification, especially in mass disasters or cases involving fragmented or decomposed remains where traditional skeletal landmarks are unavailable. This study aimed to develop a deep learning-based model for gender classification using hand radiographs, offering a rapid and objective alternative to conventional methods. We analyzed 470 left-hand X-ray images from adults aged 18 to 65 years using four convolutional neural network (CNN) architectures: ResNet-18, ResNet-50, InceptionV3, and EfficientNet-B0. Following image preprocessing and data augmentation, models were trained and validated using standard classification metrics: accuracy, precision, recall, and F1 score. Data augmentation included random rotation, horizontal flipping, and brightness adjustments to enhance model generalization. Among the tested models, ResNet-50 achieved the highest classification accuracy (93.2%) with precision of 92.4%, recall of 93.3%, and F1 score of 92.5%. While other models demonstrated acceptable performance, ResNet-50 consistently outperformed them across all metrics. These findings suggest CNNs can reliably extract sexually dimorphic features from hand radiographs. Deep learning approaches, particularly ResNet-50, provide a robust, scalable, and efficient solution for gender prediction from hand X-ray images. This method may serve as a valuable tool in forensic scenarios where speed and reliability are critical. Future research should validate these findings across diverse populations and incorporate explainable AI techniques to enhance interpretability.
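
A minimal PyTorch/torchvision sketch of fine-tuning ResNet-50 for this kind of binary radiograph classification, with augmentations similar to those described, is shown below; the dataset path, image size, and training hyperparameters are assumptions, not the study's configuration.

```python
# Sketch: fine-tuning ResNet-50 for binary (female/male) classification of
# hand radiographs with torchvision. Paths, image size, and hyperparameters
# are illustrative assumptions, not the study's exact configuration.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomRotation(10),           # augmentations analogous to the
    transforms.RandomHorizontalFlip(),       # rotation/flip/brightness steps
    transforms.ColorJitter(brightness=0.2),  # described in the paper
    transforms.ToTensor(),
])

train_ds = datasets.ImageFolder("hand_xrays/train", transform=train_tf)  # hypothetical path
loader = torch.utils.data.DataLoader(train_ds, batch_size=16, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)   # two classes: female / male

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one pass shown; train several epochs
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```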

Attention-driven hybrid deep learning and SVM model for early Alzheimer's diagnosis using neuroimaging fusion.

Paduvilan AK, Livingston GAL, Kuppuchamy SK, Dhanaraj RK, Subramanian M, Al-Rasheed A, Getahun M, Soufiene BO

PubMed · Jul 1, 2025
Alzheimer's Disease (AD) is a progressive neurodegenerative disorder that affects millions worldwide and is one of the leading causes of cognitive impairment in older adults, posing a significant global health challenge. Early and accurate diagnosis is critical for enabling effective treatment strategies, slowing disease progression, and improving patients' quality of life. Existing diagnostic methods often struggle with limited sensitivity, overfitting, and reduced reliability due to inadequate feature extraction, imbalanced datasets, and suboptimal model architectures. This study addresses these gaps by combining support vector machines (SVM) with deep learning (DL) to improve AD classification: deep learning models extract high-level imaging features, which are then passed to SVM classifiers in a late-fusion ensemble. This hybrid design leverages deep representations for pattern recognition and the SVM's robustness on small sample sets. By precisely classifying the disease from neuroimaging data, the study provides a tool for early-stage identification of possible cases, thereby enhancing management and treatment options. The approach integrates advanced data pre-processing, dynamic feature optimization, and attention-driven learning mechanisms to enhance interpretability and robustness. The research leverages a dataset of MRI and PET imaging, integrating novel fusion techniques to extract key biomarkers indicative of cognitive decline. Unlike prior approaches, this method effectively mitigates the challenges of data sparsity and dimensionality reduction while improving generalization across diverse datasets. Comparative analysis highlights a 15% improvement in accuracy, a 12% reduction in false positives, and a 10% increase in F1-score against state-of-the-art models such as HNC and MFNNC. The proposed method significantly outperforms existing techniques across metrics such as accuracy, sensitivity, specificity, and computational efficiency, achieving an overall accuracy of 98.5%.
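
A compact sketch of the hybrid design described above, with a frozen CNN backbone supplying deep features to an SVM, is given below; the backbone choice, kernel, and data handling are assumptions rather than the authors' implementation.

```python
# Sketch: hybrid deep-feature + SVM classifier in the spirit of the late-fusion
# design described above. The backbone, kernel, and data loading are assumptions.
import numpy as np
import torch
from torchvision import models
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Frozen CNN backbone used purely as a feature extractor.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def extract_features(batch: torch.Tensor) -> np.ndarray:
    """batch: (N, 3, 224, 224) preprocessed MRI/PET slices -> (N, 512) features."""
    return backbone(batch).numpy()

# SVM on the deep features; an RBF kernel is a common choice on small samples.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, probability=True))
# clf.fit(extract_features(train_batch), train_labels)
# probs = clf.predict_proba(extract_features(test_batch))[:, 1]
```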

Deep Learning and Radiomics Discrimination of Coronary Chronic Total Occlusion and Subtotal Occlusion using CTA.

Zhou Z, Bo K, Gao Y, Zhang W, Zhang H, Chen Y, Chen Y, Wang H, Zhang N, Huang Y, Mao X, Gao Z, Zhang H, Xu L

PubMed · Jul 1, 2025
Coronary chronic total occlusion (CTO) and subtotal occlusion (STO) pose diagnostic challenges and require different treatment strategies. Artificial intelligence and radiomics are promising tools for accurate discrimination. This study aimed to develop deep learning (DL) and radiomics models using coronary computed tomography angiography (CCTA) to differentiate CTO from STO lesions and compare their performance with that of the conventional method. CTO and STO lesions were retrospectively identified at a tertiary hospital and served as the training and validation sets for developing and validating the DL and radiomics models to distinguish CTO from STO. An external test cohort was recruited from two additional tertiary hospitals with identical eligibility criteria. All participants underwent CCTA within 1 month before invasive coronary angiography. A total of 581 participants (mean age, 50 years ± 11 [SD]; 474 [81.6%] men) with 600 lesions were enrolled, including 403 CTO and 197 STO lesions. The DL and radiomics models exhibited better discrimination performance than the conventional method, with areas under the curve of 0.908 and 0.860, respectively, vs. 0.794 in the validation set (all p<0.05), and 0.893 and 0.827, respectively, vs. 0.746 in the external test set (all p<0.05). The proposed CCTA-based DL and radiomics models achieved efficient and accurate discrimination of coronary CTO and STO.
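
The paper compares model AUCs against the conventional method; the abstract does not state which statistical test was used, so the sketch below shows one common stand-in, a paired bootstrap of the AUC difference on the same test set.

```python
# Sketch: paired bootstrap comparison of AUCs between a model and the
# conventional assessment on the same lesions; a generic stand-in, since the
# abstract does not specify the paper's statistical test.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y, p_model, p_conventional, n_boot=2000, seed=0):
    rng = np.random.default_rng(seed)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)             # resample lesions with replacement
        if len(np.unique(y[idx])) < 2:          # need both classes in the resample
            continue
        diffs.append(roc_auc_score(y[idx], p_model[idx]) -
                     roc_auc_score(y[idx], p_conventional[idx]))
    diffs = np.asarray(diffs)
    lo, hi = np.percentile(diffs, [2.5, 97.5])  # 95% CI of the AUC difference
    return diffs.mean(), (lo, hi)
```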

Stratifying trigeminal neuralgia and characterizing an abnormal property of brain functional organization: a resting-state fMRI and machine learning study.

Wu M, Qiu J, Chen Y, Jiang X

PubMed · Jul 1, 2025
Increasing evidence suggests that primary trigeminal neuralgia (TN), including classical TN (CTN) and idiopathic TN (ITN), share biological, neuropsychological, and clinical features, despite differing diagnostic criteria. Neuroimaging studies have shown neurovascular compression (NVC) differences in these disorders. However, changes in brain dynamics across these two TN subtypes remain unknown. The authors aimed to examine the functional connectivity differences in CTN, ITN, and pain-free controls. A total of 93 subjects, 50 TN patients and 43 pain-free controls, underwent resting-state functional magnetic resonance imaging (rs-fMRI). All TN patients underwent surgery, and the NVC type was verified. Functional connectivity and spontaneous brain activity were analyzed, and the significant alterations in rs-fMRI indices were selected to train classification models. The patients with TN showed increased connectivity between several brain regions, such as the medial prefrontal cortex (mPFC) and left planum temporale and decreased connectivity between the mPFC and left superior frontal gyrus. CTN patients exhibited a further reduction in connectivity between the left insular lobe and left occipital pole. Compared to controls, TN patients had heightened neural activity in the frontal regions. The CTN patients showed reduced activity in the right temporal pole compared to that in the ITN patients. These patterns effectively distinguished TN patients from controls, with an accuracy of 74.19% and an area under the receiver operating characteristic curve of 0.80. This study revealed alterations in rs-fMRI metrics in TN patients compared to those in controls and is the first to show differences between CTN and ITN. The support vector machine model of rs-fMRI indices exhibited moderate performance on discriminating TN patients from controls. These findings have unveiled potential biomarkers for TN and its subtypes, which can be used for additional investigation of the pathophysiology of the disease.
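
A minimal sketch of the general approach, ROI-to-ROI functional connectivity features from rs-fMRI time series classified with a linear SVM, is shown below using nilearn and scikit-learn; the atlas, time-series extraction, and cross-validation scheme are assumptions.

```python
# Sketch: functional connectivity features from rs-fMRI classified with a
# linear SVM, echoing the approach above. `timeseries` (a list of
# (n_timepoints, n_rois) arrays, one per subject) and `labels`
# (TN patient = 1, pain-free control = 0) are assumed inputs.
import numpy as np
from nilearn.connectome import ConnectivityMeasure, sym_matrix_to_vec
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

conn = ConnectivityMeasure(kind="correlation")
matrices = conn.fit_transform(timeseries)          # (n_subjects, n_rois, n_rois)
features = np.array([sym_matrix_to_vec(m, discard_diagonal=True) for m in matrices])

svm = SVC(kernel="linear")
acc = cross_val_score(svm, features, labels, cv=5, scoring="accuracy").mean()
auc = cross_val_score(svm, features, labels, cv=5, scoring="roc_auc").mean()
print(f"accuracy={acc:.2f}, AUC={auc:.2f}")
```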

A Novel Visual Model for Predicting Prognosis of Resected Hepatoblastoma: A Multicenter Study.

He Y, An C, Dong K, Lyu Z, Qin S, Tan K, Hao X, Zhu C, Xiu W, Hu B, Xia N, Wang C, Dong Q

PubMed · Jul 1, 2025
This study aimed to evaluate the application of a contrast-enhanced CT-based visual model in predicting postoperative prognosis in patients with hepatoblastoma (HB). We analyzed data from 224 patients across three centers (178 in the training cohort, 46 in the validation cohort). Visual features were extracted from contrast-enhanced CT images, and key features, along with clinicopathological data, were identified using LASSO Cox regression. Visual (DINOv2_score) and clinical (Clinical_score) models were developed, and a combined model integrating DINOv2_score and clinical risk factors was constructed. Nomograms were created for personalized risk assessment, with calibration curves and decision curve analysis (DCA) used to evaluate model performance. The DINOv2_score was recognized as a key prognostic indicator for HB. In both the training and validation cohorts, the combined model demonstrated superior performance in predicting disease-free survival (DFS) [C-index (95% CI): 0.886 (0.879-0.895) and 0.873 (0.837-0.909), respectively] and overall survival (OS) [C-index (95% CI): 0.887 (0.877-0.897) and 0.882 (0.858-0.906), respectively]. Calibration curves showed strong alignment between predicted and observed outcomes, while DCA demonstrated that the combined model provided greater clinical net benefit than the clinical or visual models alone across a range of threshold probabilities. The contrast-enhanced CT-based visual model serves as an effective tool for predicting postoperative prognosis in HB patients. The combined model, integrating the DINOv2_score and clinical risk factors, demonstrated superior performance in survival prediction, offering more precise guidance for personalized treatment strategies.
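
A small sketch of a LASSO-penalized Cox model and C-index evaluation with lifelines, in the spirit of the combined model above, is given below; the column names, penalty strength, and cohort split are illustrative assumptions.

```python
# Sketch: LASSO-penalized Cox model for disease-free survival with lifelines,
# evaluated by Harrell's C-index on a validation cohort. Column names, the
# penalty strength, and the cohort split are illustrative assumptions.
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# `train_df` / `valid_df`: one row per patient with the DINOv2_score, clinical
# risk factors, follow-up time ("time"), and recurrence indicator ("event").
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)      # pure L1 penalty (LASSO-like)
cph.fit(train_df, duration_col="time", event_col="event")

risk = cph.predict_partial_hazard(valid_df)         # higher = higher recurrence risk
c_index = concordance_index(valid_df["time"], -risk, valid_df["event"])
print(f"validation C-index for DFS: {c_index:.3f}")
```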