
Belue MJ, Mukhtar V, Ram R, Gokden N, Jose J, Massey JL, Biben E, Buddha S, Langford T, Shah S, Harmon SA, Turkbey B, Aydin AM

pubmed · logopapers · Jul 1 2025
The Prostate Imaging Reporting and Data System (PI-RADS) shows considerable inter-reader variability. Artificial intelligence (AI) algorithms have been suggested to provide performance comparable to PI-RADS for assessing prostate cancer (PCa) risk, albeit tested in highly selected cohorts. This study aimed to assess an AI algorithm for PCa detection in a clinical practice setting and to simulate integration of the AI model with PI-RADS for assessment of indeterminate PI-RADS 3 lesions. This retrospective cohort study externally validated a biparametric MRI-based AI model for PCa detection in a consecutive cohort of patients who underwent prostate MRI and subsequent targeted and systematic prostate biopsy at a urology clinic between January 2022 and March 2024. Radiologist interpretations followed PI-RADS v2.1, and biopsies were conducted per PI-RADS scores. The previously developed AI model provided lesion segmentations and cancer probability maps, which were compared with biopsy results. Additionally, we conducted a simulation that adjusted biopsy thresholds for index PI-RADS category 3 studies, in which AI predictions upgraded selected studies to PI-RADS category 4. Among 144 patients with a median age of 70 years and a PSA density of 0.17 ng/mL/cc, the AI's sensitivity for detection of PCa (86.6%) and clinically significant PCa (csPCa, 88.4%) was comparable to that of radiologists (85.7%, p=0.84, and 89.5%, p=0.80, respectively). The simulation combining radiologist and AI evaluations improved csPCa sensitivity by 5.8% (p=0.025). The combination of AI, PI-RADS, and PSA density provided the best diagnostic performance for csPCa (area under the curve [AUC]=0.76). The AI algorithm demonstrated PCa detection rates comparable to PI-RADS. The combination of AI with radiologist interpretation improved sensitivity and could be instrumental in the assessment of low-risk and indeterminate PI-RADS lesions.
The role of AI in PCa screening remains to be further elucidated.
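The upgrade rule in the simulation above can be sketched in a few lines. This is an illustrative toy, not the authors' code; the 0.5 AI-probability cutoff and the cohort data are hypothetical:

```python
# Toy simulation of combining PI-RADS with an AI cancer-probability map:
# PI-RADS 3 lesions with a high AI probability are treated like category 4.

def sensitivity(lesions, biopsy_rule):
    """Fraction of csPCa lesions that the given biopsy rule sends to biopsy."""
    positives = [l for l in lesions if l[2]]
    if not positives:
        return 0.0
    caught = [l for l in positives if biopsy_rule(l)]
    return len(caught) / len(positives)

def pirads_only(lesion):
    # Conventional pathway: biopsy PI-RADS >= 4.
    return lesion[0] >= 4

def pirads_plus_ai(lesion, ai_threshold=0.5):
    # Combined pathway: also biopsy PI-RADS 3 lesions the AI flags.
    pirads, ai_prob, _ = lesion
    return pirads >= 4 or (pirads == 3 and ai_prob >= ai_threshold)

# Toy cohort: (PI-RADS category, AI probability, biopsy-proven csPCa)
cohort = [
    (5, 0.95, True), (4, 0.70, True), (3, 0.80, True),
    (3, 0.20, False), (2, 0.10, False), (4, 0.40, False),
]

print(sensitivity(cohort, pirads_only))      # PI-RADS-only pathway
print(sensitivity(cohort, pirads_plus_ai))   # combined pathway catches the PI-RADS 3 csPCa
```

On this toy cohort, the combined rule recovers the csPCa lesion that the PI-RADS-only pathway misses, mirroring the sensitivity gain reported in the study.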

Yu Z, Du Y, Pang H, Li X, Liu Y, Bu S, Wang J, Zhao M, Ren Z, Li X, Yao L

pubmed · logopapers · Jul 1 2025
Cognitive decline is common in end-stage renal disease (ESRD) patients, yet its neural mechanisms are poorly understood. This study investigates structural and functional brain network reconfiguration in ESRD patients transitioning to mild cognitive impairment (MCI) and evaluates its potential for predicting MCI risk. We enrolled 90 ESRD patients with 2-year follow-up, categorized as MCI converters (MCI_C, n=48) and non-converters (MCI_NC, n=42). Brain networks were constructed using baseline resting-state functional MRI (rs-fMRI) and high angular resolution diffusion imaging, focusing on regional structural-functional coupling (SFC). A support vector machine (SVM) model was used to identify brain regions associated with cognitive decline. Mediation analysis was conducted to explore the relationship between kidney function, brain network reconfiguration, and cognition. MCI_C patients showed decreased efficiency in the structural network and compensatory changes in the functional network. Machine learning models using multimodal network features predicted MCI with high accuracy (AUC=0.928 for the training set, AUC=0.903 for the test set). SHAP analysis indicated that reduced hippocampal SFC was the most significant predictor of conversion to MCI. Mediation analysis revealed that altered brain network topology, particularly hippocampal SFC, mediated the relationship between kidney dysfunction and cognitive decline. This study provides new insights into the link between kidney function and cognition, offering potential clinical applications for structural and functional MRI biomarkers.
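One common way to quantify regional structural-functional coupling (SFC) — the abstract does not specify the authors' exact formula — is the Pearson correlation between a region's structural and functional connectivity profiles. A minimal sketch under that assumption, with toy data:

```python
# Hedged sketch: regional SFC as the correlation between a region's
# structural connectivity profile (e.g. streamline counts) and its
# functional profile (e.g. rs-fMRI correlations to the same regions).
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def regional_sfc(sc_row, fc_row):
    """SFC for one region: correlate its structural and functional profiles."""
    return pearson(sc_row, fc_row)

# Toy connectivity profiles for a 'hippocampus' row across 5 other regions
sc = [10.0, 3.0, 7.0, 1.0, 5.0]
fc = [0.8, 0.2, 0.6, 0.1, 0.4]
print(round(regional_sfc(sc, fc), 3))
```

Lower values of such a coupling measure in the hippocampus would correspond to the reduced hippocampal SFC the study identifies as the top MCI predictor.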

Biswas S, Chohan DP, Wankhede M, Rodrigues J, Bhat G, Mathew S, Mahato KK

pubmed · logopapers · Jul 1 2025
Colorectal cancer remains a major global health challenge, emphasizing the need for advanced diagnostic tools that enable early and accurate detection. Photoacoustic (PA) spectroscopy, a hybrid technique combining optical absorption with acoustic resolution, is emerging as a powerful tool in cancer diagnostics. It detects biochemical changes in biomolecules within the tumor microenvironment, aiding early identification of malignancies. Integration with modalities such as ultrasound (US), photoacoustic microscopy (PAM), and nanoparticle-enhanced imaging enables detailed mapping of tissue structure, vascularity, and molecular markers. When combined with endoscopy and machine learning (ML) for data analysis, PA technology offers real-time, minimally invasive, and highly accurate detection of colorectal tumors. This approach supports tumor classification, therapy monitoring, and detection of features such as hypoxia and tumor-associated bacteria. Recent studies integrating machine learning with PA imaging have demonstrated high diagnostic accuracy, achieving area under the curve (AUC) values up to 0.96 and classification accuracies exceeding 89%, highlighting its potential for precise, noninvasive colorectal cancer detection. Continued advancements in nanoparticle design, molecular targeting, and ML analytics position PA as a key tool for personalized colorectal cancer management.

Moreira GC, do Carmo Ribeiro CS, Verner FS, Lemos CAA

pubmed · logopapers · Jul 1 2025
This systematic review aimed to assess the performance of artificial intelligence (AI) in the evaluation of maxillary sinus (MS) mucosal alterations in imaging examinations compared to human analysis. Studies that presented radiographic images for the diagnosis of paranasal sinus diseases, as well as control groups for comparison with AI, were included. Articles were excluded if they performed tests on animals, addressed other conditions or surgical methods, did not present data on the diagnosis of MS or on the outcomes of interest (area under the curve, sensitivity, specificity, and accuracy), or compared outcomes only among different AIs. Searches were conducted in five electronic databases and the gray literature. The risk of bias (RB) was assessed using QUADAS-2 and the certainty of evidence using GRADE. Six studies were included. All were retrospective observational studies, with serious RB and considerable heterogeneity in methodologies. The AI achieved results similar to those of humans; however, imprecision was assessed as serious for the outcomes, and the certainty of evidence was classified as very low according to the GRADE approach. Furthermore, a dose-response effect was observed, as specialists demonstrated greater mastery of MS diagnosis than resident professionals or general clinicians. Considering the outcomes, AI represents a complementary tool for assessing maxillary mucosal alterations, especially for professionals with less experience. Finally, performance analysis and the definition of comparison parameters should be encouraged in future research. AI is a potentially useful complementary tool for assessing maxillary sinus mucosal alterations; however, studies still lack methodological standardization.

Zhang J, Huang W, Li Y, Zhang X, Chen Y, Chen S, Ming Q, Jiang Q, Xv Y

pubmed · logopapers · Jul 1 2025
To develop and validate a computed tomography (CT) radiomics-based interpretable machine learning (ML) model for predicting 5-year recurrence-free survival (RFS) in non-metastatic clear cell renal cell carcinoma (ccRCC). 559 patients with non-metastatic ccRCC were retrospectively enrolled from eight independent institutes between March 2013 and January 2019 and assigned to the primary set (n=271), external test set 1 (n=216), and external test set 2 (n=72). 1316 radiomics features were extracted with PyRadiomics. The least absolute shrinkage and selection operator (LASSO) algorithm was used for feature selection and Rad-Score construction. Patients were stratified into low and high 5-year recurrence risk groups based on the Rad-Score, followed by Kaplan-Meier analyses. Five ML models integrating the Rad-Score and clinicopathological risk factors were compared. Model performance was evaluated via discrimination, calibration, and decision curve analysis. The most robust ML model was interpreted using the SHapley Additive exPlanations (SHAP) method. Thirteen radiomics features were filtered to produce the Rad-Score, which predicted 5-year RFS with areas under the receiver operating characteristic curve (AUCs) of 0.734-0.836. Kaplan-Meier analysis showed significant survival differences based on the Rad-Score (all log-rank p values <0.05). The random forest model outperformed the other models, obtaining AUCs of 0.826 (95% confidence interval [CI]: 0.766-0.879) and 0.799 (95% CI: 0.670-0.899) in external test sets 1 and 2, respectively. The SHAP analysis suggested positive associations between contributing factors and 5-year RFS status in non-metastatic ccRCC. A CT radiomics-based interpretable ML model can effectively predict 5-year RFS in non-metastatic ccRCC patients, distinguishing between low and high 5-year recurrence risks.
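The Rad-Score construction described above is a linear combination of the LASSO-selected features. A minimal sketch — coefficients and feature names are hypothetical, not the paper's 13 features:

```python
# Illustrative Rad-Score: intercept + sum(coef_i * feature_i) over the
# features LASSO kept, followed by dichotomization into risk groups.

def rad_score(features, coefs, intercept=0.0):
    return intercept + sum(coefs[name] * value for name, value in features.items())

def risk_group(score, cutoff):
    # Patients are split at a cutoff (e.g. the training-set median) into
    # low vs high 5-year recurrence risk, as in the Kaplan-Meier analysis.
    return "high" if score >= cutoff else "low"

# Hypothetical LASSO coefficients and one patient's feature values
coefs = {"shape_Sphericity": -1.2, "glcm_Contrast": 0.8, "firstorder_Entropy": 0.5}
patient = {"shape_Sphericity": 0.6, "glcm_Contrast": 1.4, "firstorder_Entropy": 2.0}

score = rad_score(patient, coefs, intercept=-0.3)
print(score, risk_group(score, cutoff=0.5))
```

The dichotomized groups are then compared with Kaplan-Meier curves and a log-rank test, as in the study.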

Huang Z, Wang L, Mei H, Liu J, Zeng H, Liu W, Yuan H, Wu K, Liu H

pubmed · logopapers · Jul 1 2025
The classification of renal cell carcinoma (RCC) histological subtypes plays a crucial role in clinical diagnosis. However, traditional image normalization methods often struggle with discrepancies arising from differences in imaging parameters, scanning devices, and multi-center data, which can impact model robustness and generalizability. This study included 1628 patients with pathologically confirmed RCC who underwent nephrectomy across eight cohorts. These were divided into a training set, a validation set, external test dataset 1, and external test dataset 2. We proposed an "Aortic Enhancement Normalization" (AEN) method based on the lesion-to-aorta enhancement ratio and developed an automated lesion segmentation model along with a multi-scale CT feature extractor. Several machine learning algorithms, including Random Forest, LightGBM, CatBoost, and XGBoost, were used to build classification models and compare the performance of the AEN and traditional approaches for evaluating histological subtypes (clear cell renal cell carcinoma [ccRCC] vs. non-ccRCC). Additionally, we employed SHAP analysis to further enhance the transparency and interpretability of the model's decisions. The experimental results demonstrated that the AEN method outperformed the traditional normalization method across all four algorithms. Specifically, in the XGBoost model, the AEN method significantly improved performance in both internal and external validation sets, achieving AUROC values of 0.89, 0.81, and 0.80, highlighting its superior performance and strong generalizability. SHAP analysis revealed that multi-scale CT features played a critical role in the model's decision-making process. The proposed AEN method effectively reduces the impact of imaging parameter differences, significantly improving the robustness and generalizability of histological subtype (ccRCC vs. non-ccRCC) models. 
This approach provides new insights for multi-center data analysis and demonstrates promising clinical applicability.
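The core idea of "Aortic Enhancement Normalization" — expressing lesion attenuation relative to aortic enhancement in the same scan — can be sketched as follows. The abstract does not give the exact formula, so the function below is our own hedged reading of a lesion-to-aorta enhancement ratio:

```python
# Hedged sketch of AEN-style normalization: dividing lesion attenuation by
# aortic enhancement in the same scan cancels much of the variation from
# contrast timing and scanner protocol. Names are hypothetical.

def aortic_enhancement_normalize(lesion_hu, aorta_hu, unenhanced_aorta_hu=0.0):
    """Lesion-to-aorta enhancement ratio for one contrast phase (HU units)."""
    aortic_enhancement = aorta_hu - unenhanced_aorta_hu
    if aortic_enhancement <= 0:
        raise ValueError("aortic enhancement must be positive")
    return lesion_hu / aortic_enhancement

# Same lesion imaged under two protocols with different contrast timing:
# absolute HU differs, but the normalized ratio is comparable.
print(aortic_enhancement_normalize(lesion_hu=120.0, aorta_hu=300.0))  # 0.4
print(aortic_enhancement_normalize(lesion_hu=160.0, aorta_hu=400.0))  # 0.4
```

Ratios of this kind, rather than raw HU, would then feed the multi-scale feature extractor and the ccRCC vs. non-ccRCC classifiers.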

Wang P, Cui J, Du H, Qian Z, Zhan H, Zhang H, Ye W, Meng W, Bai R

pubmed · logopapers · Jul 1 2025
Accurate preoperative prediction of spread through air spaces (STAS) in primary lung adenocarcinoma (LUAD) is critical for optimizing surgical strategies and improving patient outcomes. This study aimed to develop a machine learning (ML)-based model to predict STAS using preoperative CT imaging features and clinicopathological data, while enhancing interpretability through Shapley additive explanations (SHAP) analysis. This multicenter retrospective study included 1237 patients with pathologically confirmed primary LUAD from three hospitals. Patients from Center 1 (n=932) were divided into a training set (n=652) and an internal test set (n=280). Patients from Centers 2 (n=165) and 3 (n=140) formed the external validation sets. CT imaging features and clinical variables were selected using Boruta and least absolute shrinkage and selection operator (LASSO) regression. Seven ML models were developed and evaluated using five-fold cross-validation. Performance was assessed using F1 score, recall, precision, specificity, sensitivity, and area under the receiver operating characteristic curve (AUC). The extreme gradient boosting (XGBoost) model achieved AUCs of 0.973 (training set), 0.862 (internal test set), and 0.842/0.810 (external validation sets). SHAP analysis identified nodule type, carcinoembryonic antigen, maximum nodule diameter, and lobulated sign as key features for predicting STAS. Logistic regression analysis confirmed these as independent risk factors. The XGBoost model demonstrated high predictive accuracy and interpretability for STAS. By integrating widely available clinical and imaging features, this model offers a practical and effective tool for preoperative risk stratification, supporting personalized surgical planning in primary LUAD management.
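The AUC figures reported across these abstracts all follow the same definition, which is worth making concrete: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (the Mann-Whitney formulation). An illustrative stdlib implementation, not the study's code:

```python
# AUC via the Mann-Whitney statistic: count score "wins" of positives over
# negatives, with ties worth half a win.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy predicted STAS probabilities and pathology labels (1 = STAS-positive)
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))
```

Production pipelines would use an O(n log n) rank-based version (e.g. scikit-learn's `roc_auc_score`), but the quadratic form above states the definition directly.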

Zhou Y, Lin G, Chen W, Chen Y, Shi C, Peng Z, Chen L, Cai S, Pan Y, Chen M, Lu C, Ji J, Chen S

pubmed · logopapers · Jul 1 2025
To construct and validate an interpretable machine learning (ML) radiomics model derived from multiparametric magnetic resonance imaging (MRI) to differentiate between luminal and non-luminal breast cancer (BC) subtypes. This study enrolled 1098 BC participants from four medical centers, categorized into a training cohort (n = 580) and validation cohorts 1-3 (n = 252, 89, and 177, respectively). Multiparametric MRI-based radiomics features were extracted from T1-weighted imaging (T1WI), T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC) maps, and dynamic contrast-enhanced (DCE) imaging. Five ML algorithms were applied to develop radiomics models, from which the best-performing model was identified. An ML-based combined model including the optimal radiomics features and clinical predictors was constructed, with performance assessed through receiver operating characteristic (ROC) analysis. The Shapley additive explanations (SHAP) method was used to assess model interpretability. Tumor size and MR-reported lymph node status were chosen as significant clinical variables. Thirteen radiomics features were identified from the multiparametric MRI images. The extreme gradient boosting (XGBoost) radiomics model performed best, achieving areas under the curve (AUCs) of 0.941, 0.903, 0.862, and 0.894 across the training and validation cohorts 1-3, respectively. The XGBoost combined model showed favorable discriminative power, with AUCs of 0.956, 0.912, 0.894, and 0.906 in the training and validation cohorts 1-3, respectively. SHAP visualization facilitated global interpretation, identifying "ADC_wavelet-HLH_glszm_ZoneEntropy" and "DCE_wavelet-HLL_gldm_DependenceVariance" as the most significant features for the model's predictions. The XGBoost combined model derived from multiparametric MRI may proficiently differentiate between luminal and non-luminal BC and aid in treatment decision-making.
An interpretable machine learning radiomics model can preoperatively predict luminal and non-luminal subtypes in breast cancer, thereby aiding therapeutic decision-making.
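SHAP attributions like those above have a closed form in the simplest case, which makes the idea concrete: for a linear model with independent features, the Shapley value of feature i reduces to coef_i × (x_i − mean_i). A toy sketch under that assumption — the feature names echo the abstract but the numbers are invented:

```python
# Hedged sketch of SHAP-style attribution for a linear model with
# independent features: each feature's contribution is measured relative
# to the dataset-mean prediction.

def linear_shap(x, coefs, feature_means):
    """Exact per-feature Shapley contributions for a linear model."""
    return {name: coefs[name] * (x[name] - feature_means[name]) for name in coefs}

# Toy coefficients, training-set means, and one patient's values
coefs = {"ADC_ZoneEntropy": 1.5, "DCE_DependenceVariance": -0.9, "tumor_size_cm": 0.4}
means = {"ADC_ZoneEntropy": 2.0, "DCE_DependenceVariance": 1.0, "tumor_size_cm": 2.5}
x     = {"ADC_ZoneEntropy": 2.8, "DCE_DependenceVariance": 0.4, "tumor_size_cm": 3.0}

contrib = linear_shap(x, coefs, means)
for name, value in sorted(contrib.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {value:+.2f}")
```

For tree ensembles such as XGBoost, the same quantities are computed by the SHAP library's TreeExplainer rather than this closed form; ranking features by mean absolute contribution yields plots like the study's.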

Li J, Zhou C, Qu X, Du L, Yuan Q, Han Q, Xian J

pubmed · logopapers · Jul 1 2025
To develop and validate a diagnostic framework integrating intralesional (ILN) and perilesional (PLN) radiomics derived from multiparametric MRI (mpMRI) for distinguishing IgG4-related ophthalmic disease (IgG4-ROD) from orbital mucosa-associated lymphoid tissue (MALT) lymphoma. This multicenter retrospective study analyzed 214 histopathologically confirmed cases (68 IgG4-ROD, 146 MALT lymphoma) from two institutions (2019-2024). A LASSO-SVM classifier was optimized through comparative evaluation of seven machine learning models, incorporating 1,197 fused radiomics features from the ILN and PLN regions. Diagnostic performance was benchmarked against two subspecialty radiologists (10-20 years' experience) using the area under the receiver operating characteristic curve (AUC), the precision-recall AUC (PR-AUC), and decision curve analysis (DCA), adhering to the CLEAR and METRICS guidelines. The fusion model (FR_RAD) achieved state-of-the-art performance, with an AUC of 0.927 (95% CI 0.902-0.958) and a PR-AUC of 0.901 (95% CI 0.862-0.940) in the training set, and an AUC of 0.907 (95% CI 0.857-0.965) and a PR-AUC of 0.872 (95% CI 0.820-0.924) on external testing. In contrast, the subspecialty radiologists achieved lower AUCs of 0.671-0.740 (95% CI 0.630-0.780) and PR-AUCs of 0.553-0.632 (95% CI 0.521-0.664) (all p < 0.001). FR_RAD also outperformed the radiologists in accuracy (88.6% vs. 66.2% and 71.3%; p < 0.01). DCA demonstrated a net benefit of 0.18 at a high-risk threshold of 30%, equivalent to avoiding 18 unnecessary biopsies per 100 cases. The fusion model integrating multi-regional radiomics from mpMRI achieves precise differentiation between IgG4-ROD and orbital MALT lymphoma, outperforming subspecialty radiologists. This approach highlights the potential of spatial radiomics analysis to resolve diagnostic uncertainty and reduce reliance on invasive procedures for orbital lesion characterization.
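The decision-curve figure of merit quoted above has a simple standard definition: net benefit at threshold p_t weighs true positives against false positives, NB = TP/N − (FP/N) · p_t/(1 − p_t). An illustrative calculation — the counts below are toy numbers chosen to reproduce 0.18, not the study's data:

```python
# Net benefit for decision curve analysis at a chosen risk threshold.

def net_benefit(tp, fp, n, threshold):
    """Standard DCA net benefit: TP/N - (FP/N) * p_t / (1 - p_t)."""
    return tp / n - (fp / n) * (threshold / (1.0 - threshold))

# At a 30% high-risk threshold: e.g. 24 true positives and 14 false
# positives among 100 cases.
nb = net_benefit(tp=24, fp=14, n=100, threshold=0.30)
print(round(nb, 3))  # 0.18
```

A net benefit of 0.18 means the model's triage is worth 18 extra true positives per 100 cases with no added false positives, which is the sense in which it "avoids 18 unnecessary biopsies per 100 cases."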

Ulubaba HE, Atik İ, Çiftçi R, Eken Ö, Aldhahi MI

pubmed · logopapers · Jul 1 2025
Accurate gender estimation plays a crucial role in forensic identification, especially in mass disasters or cases involving fragmented or decomposed remains where traditional skeletal landmarks are unavailable. This study aimed to develop a deep learning-based model for gender classification using hand radiographs, offering a rapid and objective alternative to conventional methods. We analyzed 470 left-hand X-ray images from adults aged 18 to 65 years using four convolutional neural network (CNN) architectures: ResNet-18, ResNet-50, InceptionV3, and EfficientNet-B0. Following image preprocessing and data augmentation, models were trained and validated using standard classification metrics: accuracy, precision, recall, and F1 score. Data augmentation included random rotation, horizontal flipping, and brightness adjustments to enhance model generalization. Among the tested models, ResNet-50 achieved the highest classification accuracy (93.2%) with precision of 92.4%, recall of 93.3%, and F1 score of 92.5%. While other models demonstrated acceptable performance, ResNet-50 consistently outperformed them across all metrics. These findings suggest CNNs can reliably extract sexually dimorphic features from hand radiographs. Deep learning approaches, particularly ResNet-50, provide a robust, scalable, and efficient solution for gender prediction from hand X-ray images. This method may serve as a valuable tool in forensic scenarios where speed and reliability are critical. Future research should validate these findings across diverse populations and incorporate explainable AI techniques to enhance interpretability.
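The accuracy, precision, recall, and F1 figures reported for the CNN comparison all derive from a single binary confusion matrix. A small stdlib sketch with toy counts (not the study's results):

```python
# Standard binary classification metrics from confusion-matrix counts.

def classification_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

# Hypothetical counts for one gender class on a held-out set
m = classification_metrics(tp=42, fp=3, fn=4, tn=45)
for name, value in m.items():
    print(f"{name}: {value:.3f}")
```

Because F1 is the harmonic mean of precision and recall, a model such as ResNet-50 can only reach a high F1 (92.5% here) by doing well on both, which is why it is reported alongside accuracy.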
