Page 153 of 2432424 results

Exploring the Incremental Value of Aorta Enhancement Normalization Method in Evaluating Renal Cell Carcinoma Histological Subtypes: A Multi-center Large Cohort Study.

Huang Z, Wang L, Mei H, Liu J, Zeng H, Liu W, Yuan H, Wu K, Liu H

PubMed · Jul 1, 2025
The classification of renal cell carcinoma (RCC) histological subtypes plays a crucial role in clinical diagnosis. However, traditional image normalization methods often struggle with discrepancies arising from differences in imaging parameters, scanning devices, and multi-center data, which can impact model robustness and generalizability. This study included 1628 patients with pathologically confirmed RCC who underwent nephrectomy across eight cohorts. These were divided into a training set, a validation set, external test dataset 1, and external test dataset 2. We proposed an "Aortic Enhancement Normalization" (AEN) method based on the lesion-to-aorta enhancement ratio and developed an automated lesion segmentation model along with a multi-scale CT feature extractor. Several machine learning algorithms, including Random Forest, LightGBM, CatBoost, and XGBoost, were used to build classification models and compare the performance of the AEN and traditional approaches for evaluating histological subtypes (clear cell renal cell carcinoma [ccRCC] vs. non-ccRCC). Additionally, we employed SHAP analysis to further enhance the transparency and interpretability of the model's decisions. The experimental results demonstrated that the AEN method outperformed the traditional normalization method across all four algorithms. Specifically, in the XGBoost model, the AEN method significantly improved performance, achieving AUROC values of 0.89, 0.81, and 0.80 on the validation set and the two external test datasets, respectively, highlighting its superior performance and strong generalizability. SHAP analysis revealed that multi-scale CT features played a critical role in the model's decision-making process. The proposed AEN method effectively reduces the impact of imaging parameter differences, significantly improving the robustness and generalizability of histological-subtype (ccRCC vs. non-ccRCC) classification models.
This approach provides new insights for multi-center data analysis and demonstrates promising clinical applicability.
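The core of the AEN idea, normalizing lesion enhancement by the aortic enhancement measured in the same phase so that scanner- and protocol-dependent scaling cancels out, can be sketched in a few lines. This is a minimal illustration with invented numbers, not the authors' code:

```python
# Minimal sketch of the lesion-to-aorta enhancement ratio behind AEN.
# Assumption: global scaling differences between scanners/protocols affect
# the lesion and the aorta alike, so their ratio is comparable across centers.
# Function name and HU values are illustrative only.

def aen_ratio(lesion_hu: float, aorta_hu: float) -> float:
    """Lesion-to-aorta enhancement ratio for a single contrast phase."""
    if aorta_hu <= 0:
        raise ValueError("aortic enhancement must be positive")
    return lesion_hu / aorta_hu

# Two hypothetical scanners measuring the same physiology with a 0.8x
# global scale difference: the raw HU values differ, the ratio does not.
scanner_a = aen_ratio(lesion_hu=120.0, aorta_hu=300.0)
scanner_b = aen_ratio(lesion_hu=96.0, aorta_hu=240.0)
```

Under this assumption, a multiplicative scanner effect divides out, which is exactly the discrepancy the abstract says traditional per-image normalization struggles with.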

Preoperative Prediction of STAS Risk in Primary Lung Adenocarcinoma Using Machine Learning: An Interpretable Model with SHAP Analysis.

Wang P, Cui J, Du H, Qian Z, Zhan H, Zhang H, Ye W, Meng W, Bai R

PubMed · Jul 1, 2025
Accurate preoperative prediction of spread through air spaces (STAS) in primary lung adenocarcinoma (LUAD) is critical for optimizing surgical strategies and improving patient outcomes. To develop a machine learning (ML)-based model to predict STAS using preoperative CT imaging features and clinicopathological data, while enhancing interpretability through Shapley additive explanations (SHAP) analysis. This multicenter retrospective study included 1237 patients with pathologically confirmed primary LUAD from three hospitals. Patients from Center 1 (n=932) were divided into a training set (n=652) and an internal test set (n=280). Patients from Centers 2 (n=165) and 3 (n=140) formed external validation sets. CT imaging features and clinical variables were selected using Boruta and least absolute shrinkage and selection operator (LASSO) regression. Seven ML models were developed and evaluated using five-fold cross-validation. Performance was assessed using F1 score, recall, precision, specificity, sensitivity, and area under the receiver operating characteristic curve (AUC). The Extreme Gradient Boosting (XGB) model achieved AUCs of 0.973 (training set), 0.862 (internal test set), and 0.842/0.810 (external validation sets). SHAP analysis identified nodule type, carcinoembryonic antigen, maximum nodule diameter, and lobulated sign as key features for predicting STAS. Logistic regression analysis confirmed these as independent risk factors. The XGB model demonstrated high predictive accuracy and interpretability for STAS. By integrating widely available clinical and imaging features, this model offers a practical and effective tool for preoperative risk stratification, supporting personalized surgical planning in primary LUAD management.
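The five-fold cross-validation protocol used to evaluate the seven models can be sketched generically (a standard illustration, not the authors' pipeline; sample count 652 mirrors their training set):

```python
import random

def kfold_indices(n_samples, k=5, seed=42):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)       # shuffle once for random folds
    folds = [idx[i::k] for i in range(k)]  # k roughly equal folds
    for i in range(k):
        test = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, test

# Every sample appears in exactly one test fold across the 5 splits.
splits = list(kfold_indices(652, k=5))
```

Each model is fit on the train indices and scored on the held-out fold, and the five scores are averaged; class-stratified variants are common for imbalanced outcomes such as STAS.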

Multiparametric MRI-based Interpretable Machine Learning Radiomics Model for Distinguishing Between Luminal and Non-luminal Tumors in Breast Cancer: A Multicenter Study.

Zhou Y, Lin G, Chen W, Chen Y, Shi C, Peng Z, Chen L, Cai S, Pan Y, Chen M, Lu C, Ji J, Chen S

PubMed · Jul 1, 2025
To construct and validate an interpretable machine learning (ML) radiomics model derived from multiparametric magnetic resonance imaging (MRI) images to differentiate between luminal and non-luminal breast cancer (BC) subtypes. This study enrolled 1098 BC participants from four medical centers, categorized into a training cohort (n = 580) and validation cohorts 1-3 (n = 252, 89, and 177, respectively). Multiparametric MRI-based radiomics features, including T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and dynamic contrast-enhanced (DCE) imaging, were extracted. Five ML algorithms were applied to develop various radiomics models, from which the best performing model was identified. A ML-based combined model including optimal radiomics features and clinical predictors was constructed, with performance assessed through receiver operating characteristic (ROC) analysis. The Shapley additive explanation (SHAP) method was utilized to assess model interpretability. Tumor size and MR-reported lymph node status were chosen as significant clinical variables. Thirteen radiomics features were identified from multiparametric MRI images. The extreme gradient boosting (XGBoost) radiomics model performed the best, achieving area under the curves (AUCs) of 0.941, 0.903, 0.862, and 0.894 across training and validation cohorts 1-3, respectively. The XGBoost combined model showed favorable discriminative power, with AUCs of 0.956, 0.912, 0.894, and 0.906 in training and validation cohorts 1-3, respectively. The SHAP visualization facilitated global interpretation, identifying "ADC_wavelet-HLH_glszm_ZoneEntropy" and "DCE_wavelet-HLL_gldm_DependenceVariance" as the most significant features for the model's predictions. The XGBoost combined model derived from multiparametric MRI may proficiently differentiate between luminal and non-luminal BC and aid in treatment decision-making. 
An interpretable machine learning radiomics model can preoperatively predict luminal and non-luminal subtypes in breast cancer, thereby aiding therapeutic decision-making.
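For readers less familiar with the headline metric, the AUCs reported above equal the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one. A minimal pairwise computation (illustrative only, with toy scores):

```python
def auc_score(labels, scores):
    """AUC as the Mann-Whitney probability that a positive outranks a negative."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)   # ties count half
    return wins / (len(pos) * len(neg))

# Perfect separation gives 1.0; here one positive is outranked by a negative.
example = auc_score([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # 3 of 4 pairs correct
```

Production code would use a rank-based O(n log n) formulation, but the pairwise definition is what the 0.94-0.89 range in the abstract is measuring.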

Innovative deep learning classifiers for breast cancer detection through hybrid feature extraction techniques.

Vijayalakshmi S, Pandey BK, Pandey D, Lelisho ME

PubMed · Jul 1, 2025
Breast cancer remains a major cause of mortality among women, where early and accurate detection is critical to improving survival rates. This study presents a hybrid classification approach for mammogram analysis by combining handcrafted statistical features and deep learning techniques. The methodology involves preprocessing with the Shearlet Transform, segmentation using Improved Otsu thresholding and Canny edge detection, followed by feature extraction through Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), and 1st-order statistical descriptors. These features are input into a 2D BiLSTM-CNN model designed to learn spatial and sequential patterns in mammogram images. Evaluated on the MIAS dataset, the proposed method achieved 97.14% accuracy, outperforming several benchmark models. The results indicate that this hybrid strategy offers improvements in classification performance and may assist radiologists in more effective breast cancer screening.
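A gray-level co-occurrence matrix of the kind used for the handcrafted features counts how often pairs of gray levels occur at a fixed pixel offset. A tiny first-principles sketch (not the authors' implementation, which would also derive contrast, homogeneity, and similar statistics from the matrix):

```python
def glcm(image, levels, dx=1, dy=0):
    """Count co-occurrences of gray levels at offset (dx, dy)."""
    h, w = len(image), len(image[0])
    mat = [[0] * levels for _ in range(levels)]
    for y in range(h):
        for x in range(w):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                mat[image[y][x]][image[ny][nx]] += 1
    return mat

# 2x2 toy "image" with gray levels 0/1; horizontal neighbor pairs only.
toy = [[0, 0],
       [1, 1]]
m = glcm(toy, levels=2)  # one (0,0) pair on the top row, one (1,1) below
```

GLRLM and first-order descriptors are built analogously from run lengths and the plain intensity histogram, and the resulting vectors are what feed the 2D BiLSTM-CNN alongside the learned features.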

Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies.

Umapathy L, Johnson PM, Dutt T, Tong A, Chopra S, Sodickson DK, Chandarana H

PubMed · Jun 30, 2025
Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists are confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa, n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) using these learned image representations, and compared them to the performance of radiologists, and of models trained on other clinical variables (age, prostate volume, PSA, and PSA density) for treatment-naïve test cohorts consisting of only PI-RADS 3 (n=253, csPCa=103) and all PI-RADS (n=531, csPCa=300) cases. On the 2 test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs in detecting csPCa (AUC=0.73 [0.66, 0.79], 0.88 [0.85, 0.91]) compared with radiologists (equivocal, AUC=0.79 [0.75, 0.83]) and the clinical model (AUCs=0.69 [0.62, 0.75], 0.78 [0.74, 0.82]). 
In the PI-RADS-3-only cohort, all of whom would be biopsied under our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies compared with the clinical model (26%, P&lt;0.001) and improved biopsy yield by 10% compared with the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). Furthermore, on the all-PI-RADS cohort, the RL decision model avoided an additional 27% of benign biopsies (138/231) relative to radiologists (33%, P&lt;0.001), with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). The combination of clinical and RL decision models avoided still more benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yields (0.52, 0.76) across the 2 test cohorts. Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that can provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy-decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa for the all-PI-RADS cohort. Such AI models can be integrated into clinical practice to supplement radiologists' reads and improve biopsy yield for equivocal decisions.
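The decision metrics quoted above reduce to simple ratios over biopsy outcomes; a hedged sketch of how yield and NPV are computed (round illustrative numbers, not the study's data):

```python
def biopsy_yield(cspca_biopsied, total_biopsied):
    """Fraction of performed biopsies that found csPCa."""
    return cspca_biopsied / total_biopsied

def npv(true_negatives, false_negatives):
    """Negative predictive value: P(no csPCa | model says do not biopsy)."""
    return true_negatives / (true_negatives + false_negatives)

# Illustrative: a cohort with 100 csPCa and 150 benign cases. Avoiding 62
# of the benign biopsies (without missing cancers) raises the yield.
baseline = biopsy_yield(100, 250)          # biopsy everyone
with_model = biopsy_yield(100, 250 - 62)   # skip 62 model-flagged benigns
```

In practice a model also misses some cancers, which is why the abstract reports sensitivity and NPV alongside yield: avoided biopsies only help if the cases skipped are overwhelmingly benign.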

Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles.

Shen X, Huang H, Nichyporuk B, Arbel T

PubMed · Jun 30, 2025
Once deployed, medical image analysis methods are often faced with unexpected image corruptions and noise perturbations. These unknown covariate shifts present significant challenges to deep learning based methods trained on "clean" images. This often results in unreliable predictions and poorly calibrated confidence, hence hindering clinical applicability. While recent methods have been developed to address specific issues such as confidence calibration or adversarial robustness, no single framework effectively tackles all these challenges simultaneously. To bridge this gap, we propose LaDiNE, a novel ensemble learning method combining the robustness of Vision Transformers with diffusion-based generative models for improved reliability in medical image classification. Specifically, transformer encoder blocks are used as hierarchical feature extractors that learn invariant features from images for each ensemble member, resulting in features that are robust to input perturbations. In addition, diffusion models are used as flexible density estimators to estimate member densities conditioned on the invariant features, leading to improved modeling of complex data distributions while retaining properly calibrated confidence. Extensive experiments on tuberculosis chest X-rays and melanoma skin cancer datasets demonstrate that LaDiNE achieves superior performance compared to a wide range of state-of-the-art methods by simultaneously improving prediction accuracy and confidence calibration under unseen noise, adversarial perturbations, and resolution degradation.
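While LaDiNE's members pair transformer encoders with diffusion density estimators, the ensemble-level prediction step, averaging member probability estimates so that disagreement between members lowers the final confidence, can be sketched generically (a heavily simplified stand-in, not LaDiNE itself):

```python
def ensemble_predict(member_probs):
    """Average per-class probabilities over ensemble members."""
    n_members = len(member_probs)
    n_classes = len(member_probs[0])
    return [sum(m[c] for m in member_probs) / n_members
            for c in range(n_classes)]

# Agreeing members keep the ensemble confident; disagreeing members hedge,
# which is the calibration-friendly behavior under distribution shift.
confident = ensemble_predict([[0.9, 0.1], [0.95, 0.05]])
hedged = ensemble_predict([[0.9, 0.1], [0.2, 0.8]])
```

This mixture behavior is why ensembles tend to be better calibrated under corruption: a perturbed input that fools one member but not another yields a softened, rather than overconfident, prediction.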

BIScreener: enhancing breast cancer ultrasound diagnosis through integrated deep learning with interpretability.

Chen Y, Wang P, Ouyang J, Tan M, Nie L, Zhang Y, Wang T

PubMed · Jun 30, 2025
Breast cancer is the leading cause of death among women worldwide, and early detection through the standardized BI-RADS framework helps physicians assess the risk of malignancy and guide appropriate diagnostic and treatment decisions. In this study, an interpretable deep learning model (BIScreener) was proposed for predicting BI-RADS classifications from breast ultrasound images, aiding in the accurate assessment of breast cancer risk and improving diagnostic efficiency. BIScreener utilizes the stacked generalization of three pretrained convolutional neural networks to analyze ultrasound images obtained from two specific instruments (Mindray R5 and HITACHI) used at local hospitals. BIScreener achieved a classification total accuracy of 90.0% and ROC-AUC value of 0.982 in the external test set for five BI-RADS categories. The proposed method achieved 83.8% classification total accuracy and 0.967 ROC-AUC value for seven BI-RADS categories. In addition, the model improved the diagnostic accuracy of two radiologists by more than 8.1% for five BI-RADS categories and by more than 4.8% for seven BI-RADS categories and reduced the explanation time by more than 19.0%, demonstrating its potential to accelerate and improve the breast cancer diagnosis process.
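Stacked generalization of the kind BIScreener uses, combining the outputs of several pretrained base networks through a meta-model, can be sketched minimally. The fixed weights below are a hypothetical stand-in for a trained meta-learner:

```python
def stack_predict(base_probs, meta_weights):
    """Combine base-model class probabilities with meta-level weights."""
    n_classes = len(base_probs[0])
    combined = [sum(w * p[c] for w, p in zip(meta_weights, base_probs))
                for c in range(n_classes)]
    total = sum(combined)
    return [c / total for c in combined]  # renormalize to a distribution

# Three base CNNs scoring a 2-class toy case; weights favor model 0,
# as a meta-learner would after seeing validation performance.
probs = [[0.7, 0.3], [0.6, 0.4], [0.4, 0.6]]
fused = stack_predict(probs, meta_weights=[0.5, 0.3, 0.2])
```

In a full stacking setup the meta-weights (or a small meta-classifier) are fit on out-of-fold predictions of the base models rather than chosen by hand.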

Ultrasound Radio Frequency Time Series for Tissue Typing: Experiments on In-Vivo Breast Samples Using Texture-Optimized Features and Multi-Origin Method of Classification (MOMC).

Arab M, Fallah A, Rashidi S, Dastjerdi MM, Ahmadinejad N

PubMed · Jun 30, 2025
One of the most promising auxiliaries for screening breast cancer (BC) is the ultrasound (US) radio-frequency (RF) time series, which has the advantage over other methods of not requiring any supplementary equipment. This article sought to propound a machine learning (ML) method for the automated categorization of breast lesions (benign, probably benign, suspicious, or malignant) using features extracted from the accumulated US RF time series. In this research, 220 data points of the aforementioned categories, recorded from 118 patients, were analyzed. The RFTSBU dataset was recorded by a SuperSonic Imagine Aixplorer® medical/research system fitted with a linear transducer. An expert radiologist manually selected regions of interest (ROIs) in B-mode images before 283 features were extracted from each ROI, utilizing textural descriptors such as the Gabor filter (GF), gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size zone matrix (GLSZM), and gray-level dependence matrix (GLDM). Subsequently, particle swarm optimization (PSO) narrowed the features to 131 highly effective ones. Ultimately, the features were classified using an innovative multi-origin method of classification (MOMC), marking a significant leap in BC diagnosis. Employing 5-fold cross-validation, the study achieved notable accuracy rates of 98.57 ± 1.09%, 91.53 ± 0.89%, and 83.71 ± 1.30% for 2-, 3-, and 4-class classifications, respectively, using MOMC-SVM and MOMC-ensemble classifiers. This research introduces an innovative ML-based approach to differentiate between diverse breast lesion types using in vivo US RF time series data. The findings underscore its efficacy in enhancing classification accuracy, promising significant strides in computer-aided diagnosis (CAD) for BC screening.
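Particle swarm optimization, used here to prune 283 features down to 131, is a general-purpose population optimizer. A minimal continuous-domain sketch of the core velocity/position update (the authors would have used a binary feature-selection variant; coefficients below are conventional defaults, not theirs):

```python
import random

def pso_minimize(f, dim, n_particles=20, iters=200, seed=0):
    """Minimize f over [-5, 5]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    gbest = min(pbest, key=f)[:]                # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                       # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

# Sanity check on a convex toy objective (the sphere function).
best = pso_minimize(lambda x: sum(v * v for v in x), dim=2)
```

For feature selection, each dimension is typically mapped through a sigmoid to an include/exclude bit and f becomes a cross-validated classifier score.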

Development and validation of a prognostic prediction model for lumbar-disc herniation based on machine learning and fusion of clinical text data and radiomic features.

Wang Z, Zhang H, Li Y, Zhang X, Liu J, Ren Z, Qin D, Zhao X

PubMed · Jun 30, 2025
Based on preoperative clinical text data and lumbar magnetic resonance imaging (MRI), we applied machine learning (ML) algorithms to construct a model that would predict early recurrence in lumbar-disc herniation (LDH) patients who underwent percutaneous endoscopic lumbar discectomy (PELD). We then explored the clinical performance of this prognostic prediction model via multimodal-data fusion. Clinical text data and radiological images of LDH patients who underwent PELD at the Intervertebral Disc Center of the Affiliated Hospital of Gansu University of Traditional Chinese Medicine (AHGUTCM; Lanzhou, China) were retrospectively collected. Two radiologists with clinical image-reading experience independently outlined regions of interest (ROIs) on the MRI images and extracted radiomic features using 3D Slicer software. We then randomly separated the samples into a training set and a test set at a 7:3 ratio, used eight ML algorithms to construct predictive radiomic-feature models, evaluated model performance by the area under the curve (AUC), and selected the optimal model for screening radiomic features and calculating radiomic scores (Rad-scores). Finally, after using logistic regression to construct a nomogram for predicting the early-recurrence rate, we evaluated the nomogram's clinical applicability using a clinical-decision curve. We initially extracted 851 radiomic features. After constructing our models, we determined based on AUC values that the optimal ML algorithm was least absolute shrinkage and selection operator (LASSO) regression, which had an AUC of 0.76 and an accuracy rate of 91%. After screening features using the LASSO model, we predicted the Rad-score for each sample of recurrent LDH using nine radiomic features. Next, we fused three clinical features (age, diabetes, and heavy manual labor) with the Rad-score to construct a nomogram with an AUC of 0.86 (95% confidence interval [CI], 0.79-0.94).
Analysis of the clinical-decision and impact curves showed that the prognostic prediction model with multimodal-data fusion had good clinical validity and applicability. We developed and analyzed a prognostic prediction model for LDH with multimodal-data fusion. Our model demonstrated good performance in predicting early postoperative recurrence in LDH patients; therefore, it has good prospects for clinical application and can provide clinicians with objective, accurate information to help them decide on presurgical treatment plans. However, external-validation studies are still needed to further validate the model's comprehensive performance and improve its generalization and extrapolation.
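A logistic-regression nomogram of this kind ultimately maps a weighted sum of the Rad-score and clinical variables to a recurrence probability; a hedged sketch of that final step (coefficients and values are invented for illustration, not the fitted model):

```python
import math

def nomogram_probability(intercept, coefs, values):
    """Predicted probability from a logistic-regression nomogram."""
    z = intercept + sum(c * v for c, v in zip(coefs, values))
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

# Hypothetical coefficients for [Rad-score, age, diabetes, heavy labor];
# the graphical nomogram is just this sum rendered as point scales.
p = nomogram_probability(intercept=-3.0,
                         coefs=[2.0, 0.02, 0.8, 0.6],
                         values=[1.2, 55, 1, 0])
```

The clinical-decision curve then compares the net benefit of acting on thresholds of this probability against treat-all and treat-none strategies.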

Enhancing weakly supervised data augmentation networks for thyroid nodule assessment using traditional and doppler ultrasound images.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

PubMed · Jun 30, 2025
Thyroid ultrasound (US) is an essential tool for detecting and characterizing thyroid nodules. In this study, we propose an innovative approach to enhance thyroid nodule assessment by integrating Doppler US images with grayscale US images through weakly supervised data augmentation networks (WSDAN). Our method reduces background noise by replacing inefficient augmentation strategies, such as random cropping, with an advanced technique guided by bounding boxes derived from Doppler US images. This targeted augmentation significantly improves model performance in both classification and localization of thyroid nodules. The training dataset comprises 1288 paired grayscale and Doppler US images, with an additional 190 pairs used for three-fold cross-validation. To evaluate the model's efficacy, we tested it on a separate set of 190 grayscale US images. Compared to five state-of-the-art models and the original WSDAN, our Enhanced WSDAN model achieved superior performance. For classification, it reached an accuracy of 91%. For localization, it achieved Dice and Jaccard indices of 87% and 75%, respectively, demonstrating its potential as a valuable clinical tool.
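The bounding-box-guided augmentation replaces blind random crops with crops anchored on the Doppler-derived box; a minimal sketch of that cropping step on a toy array-of-rows image (not the WSDAN code):

```python
def bbox_crop(image, box):
    """Crop an image (list of rows) to an (x0, y0, x1, y1) bounding box."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in image[y0:y1]]

# 4x4 toy grayscale image; a hypothetical Doppler-derived box keeps only
# the 2x2 nodule region, so augmented crops never lose the lesion.
toy = [[0, 0, 0, 0],
       [0, 5, 6, 0],
       [0, 7, 8, 0],
       [0, 0, 0, 0]]
crop = bbox_crop(toy, (1, 1, 3, 3))
```

A random crop of the same size could easily return all-background pixels, which is the failure mode the box-guided strategy is designed to avoid.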