
Interpretable Semi-federated Learning for Multimodal Cardiac Imaging and Risk Stratification: A Privacy-Preserving Framework.

Liu X, Li S, Zhu Q, Xu S, Jin Q

PubMed · Sep 5 2025
The growing heterogeneity of cardiac patient data from hospitals and wearables necessitates predictive models that are tailored, comprehensible, and privacy-preserving. This study introduces PerFed-Cardio, a lightweight and interpretable semi-federated learning (Semi-FL) system for real-time cardiovascular risk stratification using multimodal data, including cardiac imaging, physiological signals, and electronic health records (EHR). In contrast to conventional federated learning, where all clients participate uniformly, our methodology employs a personalized Semi-FL approach in which high-capacity nodes (e.g., hospitals) conduct comprehensive training while edge devices (e.g., wearables) refine shared models via modality-specific subnetworks. Cardiac MRI and echocardiography images are analyzed with lightweight convolutional neural networks enhanced with local attention modules that highlight diagnostically significant regions. Physiological signals (e.g., ECG, activity) and EHR data are fused through attention-based fusion layers. Model transparency is achieved using Local Interpretable Model-agnostic Explanations (LIME) and Grad-CAM, which provide spatial and feature-level explanations for each prediction. Evaluations on real multimodal datasets from 123 patients across five simulated institutions indicate that PerFed-Cardio attains an AUC-ROC of 0.972 with an inference latency of 130 ms. Personalized model calibration and targeted training reduce communication load by 28% while maintaining an F1-score above 92% under noisy conditions. These findings position PerFed-Cardio as a privacy-conscious, adaptive, and interpretable system for scalable cardiac risk assessment.
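
A minimal sketch of the semi-federated aggregation idea described above, in which only shared backbone parameters are averaged across clients while modality-specific subnetworks remain local. The function names, the `backbone.` prefix convention, and the size-weighted averaging are assumptions for illustration, not the authors' implementation.

```python
from collections import OrderedDict
import torch

def aggregate_shared(client_states, client_sizes, shared_prefix="backbone."):
    """Weighted-average only parameters whose names start with `shared_prefix`."""
    total = float(sum(client_sizes))
    global_shared = OrderedDict()
    for name in client_states[0]:
        if not name.startswith(shared_prefix):
            continue  # modality-specific (personalized) parameters stay on the client
        global_shared[name] = sum(
            (n / total) * state[name].float()
            for state, n in zip(client_states, client_sizes)
        )
    return global_shared

def personalize(client_state, global_shared):
    """Overwrite a client's shared backbone with the aggregated weights."""
    merged = OrderedDict(client_state)
    merged.update(global_shared)
    return merged
```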

Reperfusion injury in STEMI: a double-edged sword.

Thomas KS, Puthooran DM, Edpuganti S, Reddem AL, Jose A, Akula SSM

PubMed · Sep 5 2025
ST-elevation myocardial infarction (STEMI) is a major cardiac event that requires rapid reperfusion therapy. The same reperfusion that minimizes infarct size and mortality may paradoxically exacerbate cardiac damage, a condition known as reperfusion injury. Oxidative stress, calcium overload, mitochondrial dysfunction, and programmed cell death mechanisms worsen myocardial dysfunction. Even with optimal revascularization techniques, reperfusion injury still jeopardizes long-term prognosis and myocardial healing. A thorough narrative review was carried out using major scientific databases, including ScienceDirect, PubMed, and Google Scholar. Peer-reviewed publications from 2015 to 2025 were highlighted, with an emphasis on pathophysiological causes, clinical manifestations, novel biomarkers, imaging modalities, artificial intelligence applications, and emerging treatments related to reperfusion injury. The review focuses on the molecular processes that underlie cardiac reperfusion injury, such as reactive oxygen species, calcium dysregulation, opening of the mitochondrial permeability transition pore, and several types of programmed cell death. Clinical syndromes such as myocardial stunning, coronary no-reflow, and intramyocardial hemorrhage are examined in depth, all of which lead to adverse outcomes such as heart failure and left ventricular dysfunction. Cardiac magnetic resonance imaging, coronary angiography, and key biomarkers such as N-terminal proBNP and soluble ST2 aid in risk stratification and prognosis. Pharmacological treatments are examined alongside mechanical techniques such as ischemic postconditioning and remote ischemic conditioning. Despite promising research findings, most therapies have not yet proven consistently effective in large clinical trials. Potential future avenues include consideration of sex-specific risk factors, mitochondria-targeted medicines, tailored therapies, and the use of artificial intelligence for risk assessment and early diagnosis. Reperfusion injury remains a significant obstacle to optimal recovery after STEMI, even with advances in revascularization. STEMI management still relies heavily on early reperfusion, but adjuvant therapies that specifically target reperfusion injury are urgently needed. Molecular-targeted approaches, AI-driven risk assessment, and advances in precision medicine have the potential to reduce cardiac damage and improve long-term outcomes for patients with STEMI.

Prediction of bronchopulmonary dysplasia using machine learning from chest X-rays of premature infants in the neonatal intensive care unit.

Ozcelik G, Erol S, Korkut S, Kose Cetinkaya A, Ozcelik H

PubMed · Sep 5 2025
Bronchopulmonary dysplasia (BPD) is a significant morbidity in premature infants. This study aimed to assess the accuracy of the model's predictions in comparison to clinical outcomes. Medical records of premature infants born ≤ 28 weeks and < 1250 g between January 1, 2020, and December 31, 2021, in the neonatal intensive care unit were obtained. In this retrospective model development and validation study, an artificial intelligence model was developed using DenseNet121 deep learning architecture. The data set and test set consisted of chest radiographs obtained on postnatal day 1 as well as during the 2nd, 3rd, and 4th weeks. The model predicted the likelihood of developing no BPD, or mild, moderate, or severe BPD. The accuracy of the artificial intelligence model was tested based on the clinical outcomes of patients. This study included 122 premature infants with a birth weight of 990 g (range: 840-1120 g). Of these, 33 (27%) patients did not develop BPD, 24 (19.7%) had mild BPD, 28 (23%) had moderate BPD, and 37 (30%) had severe BPD. A total of 395 chest radiographs from these patients were used to develop an artificial intelligence (AI) model for predicting BPD. Area under the curve values, representing the accuracy of predicting severe, moderate, mild, and no BPD, were as follows: 0.79, 0.75, 0.82, and 0.82 for day 1 radiographs; 0.88, 0.82, 0.74, and 0.94 for week 2 radiographs; 0.87, 0.83, 0.88, and 0.96 for week 3 radiographs; and 0.90, 0.82, 0.86, and 0.97 for week 4 radiographs. The artificial intelligence model successfully identified BPD on chest radiographs and classified its severity. The accuracy of the model can be improved using larger control and external validation datasets.
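
A brief sketch of a DenseNet121 classifier with a four-class output head (no, mild, moderate, or severe BPD), matching the architecture named above. Input size, preprocessing, and hyperparameters are illustrative assumptions, not the study's settings.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 4  # no, mild, moderate, severe BPD

model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, NUM_CLASSES)

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # radiographs are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
# Per-class AUCs of the kind reported above can be computed one-vs-rest with
# sklearn.metrics.roc_auc_score on the softmax outputs.
```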

Preoperative Assessment of Extraprostatic Extension in Prostate Cancer Using an Interpretable Tabular Prior-Data Fitted Network-Based Radiomics Model From MRI.

Liu BC, Ding XH, Xu HH, Bai X, Zhang XJ, Cui MQ, Guo AT, Mu XT, Xie LZ, Kang HH, Zhou SP, Zhao J, Wang BJ, Wang HY

PubMed · Sep 5 2025
MRI assessment of extraprostatic extension (EPE) in prostate cancer (PCa) is challenging due to limited accuracy and interobserver agreement. The aim was to develop an interpretable Tabular Prior-data Fitted Network (TabPFN)-based radiomics model to evaluate EPE on MRI and to explore its integration with radiologists' assessments. This retrospective study included 513 consecutive patients who underwent radical prostatectomy. Four hundred and eleven patients from center 1 (mean age 67 ± 7 years) formed the training (287 patients) and internal test (124 patients) sets, and 102 patients from center 2 (mean age 66 ± 6 years) served as an external test set. Imaging was performed at 3 T with fast spin-echo T2-weighted imaging (T2WI) and single-shot echo-planar diffusion-weighted imaging. Radiomics features were extracted from T2WI and apparent diffusion coefficient maps, and the TabRadiomics model was developed using TabPFN. Three machine learning models served as baseline comparisons: support vector machine, random forest, and categorical boosting. Two radiologists (with > 1500 and > 500 prostate MRI interpretations, respectively) independently evaluated EPE grade on MRI. Artificial intelligence (AI)-modified EPE grading algorithms incorporating the TabRadiomics model with radiologists' interpretations of curvilinear contact length and frank EPE were simulated. Statistical analysis used the area under the receiver operating characteristic curve (AUC), the DeLong test, and the McNemar test; p < 0.05 was considered significant. The TabRadiomics model performed comparably to the machine learning baselines in both internal and external tests, with AUCs of 0.806 (95% CI, 0.727-0.884) and 0.842 (95% CI, 0.770-0.912), respectively. AI-modified algorithms showed significantly higher accuracy than the less experienced reader in internal testing, with up to 34.7% of interpretations requiring no radiologist input; however, no difference was observed for either reader in the external test set. The TabRadiomics model demonstrated high performance in EPE assessment and may improve clinical assessment in PCa. Evidence level: 4. Technical efficacy: Stage 2.
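
A minimal sketch of fitting a TabPFN classifier on a table of radiomics features, which is what the TabRadiomics model does conceptually. The scikit-learn-style `TabPFNClassifier` interface shown here is assumed from the public `tabpfn` package and may differ by version; the synthetic data are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from tabpfn import TabPFNClassifier  # assumed public package interface

# X: (n_patients, n_radiomics_features) from T2WI and ADC maps; y: EPE label (0/1)
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(287, 50)), rng.integers(0, 2, 287)
X_test, y_test = rng.normal(size=(124, 50)), rng.integers(0, 2, 124)

clf = TabPFNClassifier()
clf.fit(X_train, y_train)
probs = clf.predict_proba(X_test)[:, 1]
print("AUC:", roc_auc_score(y_test, probs))
```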

Interpretable Deep Transfer Learning for Breast Ultrasound Cancer Detection: A Multi-Dataset Study

Mohammad Abbadi, Yassine Himeur, Shadi Atalla, Wathiq Mansoor

arXiv preprint · Sep 5 2025
Breast cancer remains a leading cause of cancer-related mortality among women worldwide. Ultrasound imaging, widely used due to its safety and cost-effectiveness, plays a key role in early detection, especially in patients with dense breast tissue. This paper presents a comprehensive study on the application of machine learning and deep learning techniques for breast cancer classification using ultrasound images. Using datasets such as BUSI, BUS-BRA, and BrEaST-Lesions USG, we evaluate classical machine learning models (SVM, KNN) and deep convolutional neural networks (ResNet-18, EfficientNet-B0, GoogLeNet). Experimental results show that ResNet-18 achieves the highest accuracy (99.7%) and perfect sensitivity for malignant lesions. Classical ML models, though outperformed by CNNs, achieve competitive performance when enhanced with deep feature extraction. Grad-CAM visualizations further improve model transparency by highlighting diagnostically relevant image regions. These findings support the integration of AI-based diagnostic tools into clinical workflows and demonstrate the feasibility of deploying high-performing, interpretable systems for ultrasound-based breast cancer detection.
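
A short sketch of the "classical ML on deep features" setup mentioned above: a pretrained ResNet-18 serves as a frozen feature extractor and an SVM is trained on the pooled features. Preprocessing, dataset loading, and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

resnet = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
feature_extractor = nn.Sequential(*list(resnet.children())[:-1])  # drop the final FC layer
feature_extractor.eval()

@torch.no_grad()
def extract_features(images):                  # images: (N, 3, 224, 224) ultrasound tensor
    feats = feature_extractor(images)          # (N, 512, 1, 1) pooled features
    return feats.flatten(1).cpu().numpy()      # (N, 512)

# X_img: preprocessed image batch; y: benign/malignant labels
# svm = SVC(kernel="rbf", probability=True).fit(extract_features(X_img), y)
```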

AI-driven and Traditional Radiomic Model for Predicting Muscle Invasion in Bladder Cancer via Multi-parametric Imaging: A Systematic Review and Meta-analysis.

Wang Z, Shi H, Wang Q, Huang Y, Feng M, Yu L, Dong B, Li J, Deng X, Fu S, Zhang G, Wang H

PubMed · Sep 5 2025
This study systematically evaluates the diagnostic performance of artificial intelligence (AI)-driven and conventional radiomics models in detecting muscle-invasive bladder cancer (MIBC) through meta-analytical approaches. Furthermore, it investigates their potential synergistic value with the Vesical Imaging-Reporting and Data System (VI-RADS) and assesses clinical translation prospects. This study adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We conducted a comprehensive systematic search of PubMed, Web of Science, Embase, and Cochrane Library databases up to May 13, 2025, and manually screened the references of included studies. The quality and risk of bias of the selected studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) and Radiomics Quality Score (RQS) tools. We pooled the area under the curve (AUC), sensitivity, specificity, positive likelihood ratio (PLR), negative likelihood ratio (NLR), diagnostic odds ratio (DOR), and their 95% confidence intervals (95% CI). Additionally, meta-regression and subgroup analyses were performed to identify potential sources of heterogeneity. This meta-analysis incorporated 43 studies comprising 9624 patients. The majority of included studies demonstrated low risk of bias, with a mean RQS of 18.89. Pooled analysis yielded an AUC of 0.92 (95% CI: 0.89-0.94). The aggregate sensitivity and specificity were both 0.86 (95% CI: 0.84-0.87), with heterogeneity indices of I² = 43.58 and I² = 72.76, respectively. The PLR was 5.97 (95% CI: 5.28-6.75, I² = 64.04), while the NLR was 0.17 (95% CI: 0.15-0.19, I² = 37.68). The DOR reached 35.57 (95% CI: 29.76-42.51, I² = 99.92). Notably, all included studies exhibited significant heterogeneity (P < 0.1). Meta-regression and subgroup analyses identified several significant sources of heterogeneity, including: study center type (single-center vs. multi-center), sample size (<100 vs. ≥100 patients), dataset classification (training, validation, testing, or ungrouped), imaging modality (computed tomography [CT] vs. magnetic resonance imaging [MRI]), modeling algorithm (deep learning vs. machine learning vs. other), validation methodology (cross-validation vs. cohort validation), segmentation method (manual vs. [semi]automated), regional differences (China vs. other countries), and risk of bias (high vs. low vs. unclear). AI-driven and traditional radiomic models have exhibited robust diagnostic performance for MIBC. Nevertheless, substantial heterogeneity across studies necessitates validation through multinational, multicenter prospective cohort studies to establish external validity.
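
The pooled likelihood ratios and DOR reported above follow, approximately, from the pooled sensitivity and specificity via the standard identities below; they do not match exactly because each quantity is pooled across studies separately (typically with a random-effects model) rather than derived from the pooled sensitivity and specificity.

```python
sens, spec = 0.86, 0.86          # pooled estimates reported above

plr = sens / (1 - spec)          # positive likelihood ratio ≈ 6.14 (reported: 5.97)
nlr = (1 - sens) / spec          # negative likelihood ratio ≈ 0.16 (reported: 0.17)
dor = plr / nlr                  # diagnostic odds ratio ≈ 37.7 (reported: 35.57)

print(f"PLR={plr:.2f}, NLR={nlr:.2f}, DOR={dor:.1f}")
```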

Enhancing Breast Density Assessment in Mammograms Through Artificial Intelligence.

da Rocha NC, Barbosa AMP, Schnr YO, Peres LDB, de Andrade LGM, de Magalhaes Rosa GJ, Pessoa EC, Corrente JE, de Arruda Silveira LV

PubMed · Sep 5 2025
Breast cancer is the leading cause of cancer-related deaths among women worldwide. Early detection through mammography significantly improves outcomes, with breast density acting as both a risk factor and a key interpretive feature. Although the Breast Imaging Reporting and Data System (BI-RADS) provides standardized density categories, assessments are often subjective and variable. While automated tools exist, most are proprietary and resource-intensive, limiting their use in underserved settings. There is a critical need for accessible, low-cost AI solutions that provide consistent breast density classification. This study aims to develop and evaluate an open-source, computer vision-based approach using deep learning techniques for objective breast density assessment in mammography images, with a focus on accessibility, consistency, and applicability in resource-limited healthcare environments. Our approach integrates a custom-designed convolutional neural network (CD-CNN) with an extreme learning machine (ELM) layer for image-based breast density classification. The retrospective dataset includes 10,371 full-field digital mammography images, previously categorized by radiologists into one of four BI-RADS breast density categories (A-D). The proposed model achieved a testing accuracy of 95.4%, with a specificity of 98.0% and a sensitivity of 92.5%. Agreement between the automated breast density classification and the specialists' consensus was strong, with a weighted kappa of 0.90 (95% CI: 0.82-0.98). On the external and independent mini-MIAS dataset, the model achieved an accuracy of 73.9%, a precision of 81.1%, a specificity of 87.3%, and a sensitivity of 75.1%, which is comparable to the performance reported in previous studies using this dataset. The proposed approach advances breast density assessment in mammograms, enhancing accuracy and consistency to support early breast cancer detection.
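
A minimal sketch of an extreme learning machine (ELM) classification head of the kind described above: CNN-derived features pass through a fixed random hidden layer, and only the output weights are solved in closed form. Layer sizes, the tanh activation, and the ridge penalty are illustrative assumptions.

```python
import numpy as np

class ELMHead:
    def __init__(self, n_features, n_hidden=1024, n_classes=4, reg=1e-3, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_features, n_hidden))   # fixed random projection
        self.b = rng.normal(size=n_hidden)
        self.reg = reg
        self.n_classes = n_classes
        self.beta = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):                                    # y: integer labels 0-3 (BI-RADS A-D)
        H = self._hidden(X)
        T = np.eye(self.n_classes)[y]                       # one-hot targets
        # Ridge-regularized least squares: only the output weights are learned.
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(H.shape[1]), H.T @ T)
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)
```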

Deep Learning Based Multiomics Model for Risk Stratification of Postoperative Distant Metastasis in Colorectal Cancer.

Yao X, Han X, Huang D, Zheng Y, Deng S, Ning X, Yuan L, Ao W

PubMed · Sep 4 2025
To develop deep learning-based multiomics models for predicting postoperative distant metastasis (DM) and evaluating survival prognosis in colorectal cancer (CRC) patients. This retrospective study included 521 CRC patients who underwent curative surgery at two centers. Preoperative CT and postoperative hematoxylin-eosin (HE)-stained slides were collected. A total of 381 patients from Center 1 were split (7:3) into training and internal validation sets; 140 patients from Center 2 formed the independent external validation set. Patients were grouped based on DM status during follow-up. Radiological and pathological models were constructed using independent imaging and pathological predictors. Deep features were extracted with a ResNet-101 backbone to build deep learning radiomics (DLRS) and deep learning pathomics (DLPS) models. Two integrated models were developed: Nomogram 1 (radiological + DLRS) and Nomogram 2 (pathological + DLPS). CT-reported T (cT) stage (OR=2.00, P=0.006) and CT-reported N (cN) stage (OR=1.63, P=0.023) were identified as independent radiologic predictors used to build the radiological model; pN stage (OR=1.91, P=0.003) and perineural invasion (OR=2.07, P=0.030) were identified as pathological predictors used to build the pathological model. DLRS and DLPS incorporated 28 and 30 deep features, respectively. In the training set, the area under the curve (AUC) for the radiological, pathological, DLRS, DLPS, Nomogram 1, and Nomogram 2 models was 0.657, 0.687, 0.931, 0.914, 0.938, and 0.930, respectively. DeLong's test showed DLRS, DLPS, and both nomograms significantly outperformed the conventional models (P < 0.05). Kaplan-Meier analysis confirmed effective 3-year disease-free survival (DFS) stratification by the nomograms. Deep learning-based multiomics models provided high accuracy for postoperative DM prediction. Nomogram models enabled reliable DFS risk stratification in CRC patients.
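
A sketch of how an integrated nomogram-style model such as "Nomogram 1" above can be expressed: a logistic regression combining clinical predictors (cT and cN stage) with a deep-learning signature score. Variable names and the random data are placeholders, not the study's fitted model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cT_stage":   rng.integers(1, 5, 200),     # CT-reported T stage
    "cN_stage":   rng.integers(0, 3, 200),     # CT-reported N stage
    "dlrs_score": rng.random(200),             # deep learning radiomics signature
    "dm":         rng.integers(0, 2, 200),     # distant metastasis during follow-up
})

X = sm.add_constant(df[["cT_stage", "cN_stage", "dlrs_score"]])
fit = sm.Logit(df["dm"], X).fit(disp=0)
print(fit.summary())   # the coefficients map directly onto nomogram point scales
```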

Machine Learning-Based Prediction of Lymph Node Metastasis and Volume Using Preoperative Ultrasound Features in Papillary Thyroid Carcinoma.

Hu T, Cai Y, Zhou T, Zhang Y, Huang K, Huang X, Qian S, Wang Q, Luo D

PubMed · Sep 4 2025
To construct a predictive model of cervical lymph node metastasis and metastatic volume based on machine learning algorithms and preoperative ultrasound characteristics. A retrospective analysis was conducted on 573 patients with PTC who underwent surgery at our institution from 2017 to 2022. Patient demographic and clinical characteristics were systematically collected. Feature selection was performed using univariate analysis and logistic regression (LR); variables with p < 0.05 were considered statistically significant. Predictive models for cervical lymph node metastasis and metastatic volume in papillary thyroid carcinoma were constructed using advanced machine learning algorithms: K-Nearest Neighbors (KNN), Extreme Gradient Boosting (XGBoost), and Support Vector Machine (SVM). Model performance was rigorously assessed on validation cohort data using the area under the Receiver Operating Characteristic (ROC) curve, sensitivity, specificity, and accuracy. Of the 573 patients, 320 had lymph node metastasis, of whom 127 had small-volume and 193 had medium-volume metastasis. In the model predicting neck lymph node metastasis, the gradient boosting method exhibited the best performance, with an area under the ROC curve of 0.784, sensitivity of 76.2%, specificity of 70.6%, and accuracy of 73.8%. In the model predicting metastatic volume in neck lymph nodes for PTC, the gradient boosting method also demonstrated the best performance, with an area under the ROC curve of 0.779, sensitivity of 71.7%, specificity of 75.9%, and accuracy of 74.4%. Machine learning-based predictive models integrating preoperative ultrasound features demonstrate robust performance in stratifying neck lymph node metastasis risk for PTC patients. These models can optimize surgical planning by guiding the extent of lymph node dissection and individualizing treatment strategies, potentially reducing unnecessary extensive surgery. The integration of advanced computational techniques with clinical imaging provides a data-driven paradigm for preoperative risk assessment in thyroid oncology.
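
An illustrative sketch of the model comparison described above: fit KNN, SVM, and a gradient-boosting classifier on preoperative ultrasound features and compare AUC, sensitivity, and specificity on a held-out set. The synthetic features are placeholders, and scikit-learn's GradientBoostingClassifier stands in for XGBoost.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import confusion_matrix, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=573, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(probability=True),
    "GradientBoosting": GradientBoostingClassifier(),
}
for name, clf in models.items():
    prob = clf.fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, (prob >= 0.5).astype(int)).ravel()
    print(f"{name}: AUC={roc_auc_score(y_te, prob):.3f}  "
          f"sens={tp / (tp + fn):.3f}  spec={tn / (tn + fp):.3f}")
```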

Interpretable Transformer Models for rs-fMRI Epilepsy Classification and Biomarker Discovery

Jeyabose Sundar, A., Boerwinkle, V. L., Robinson Vimala, B., Leggio, O., Kazemi, M.

medRxiv preprint · Sep 4 2025
Background: Automated interpretation of resting-state fMRI (rs-fMRI) for epilepsy diagnosis remains a challenge. We developed a regularized transformer that models parcel-wise spatial patterns and long-range temporal dynamics to classify epilepsy and generate interpretable, network-level candidate biomarkers. Methods: Inputs were Schaefer-200 parcel time series extracted after standardized preprocessing (fMRIPrep). The regularized transformer is an attention-based sequence model with learned positional encoding and multi-head self-attention, combined with fMRI-specific regularization (dropout, weight decay, gradient clipping) and augmentation to improve robustness on modest clinical cohorts. Training used stratified group 4-fold cross-validation on n=65 (30 epilepsy, 35 controls) with fMRI-specific augmentation (time warping, adaptive noise, structured masking). We compared the transformer to seven baselines (MLP, 1D-CNN, LSTM, CNN-LSTM, GCN, GAT, Attention-Only). External validation used an independent set (10 UNC epilepsy patients, 10 controls). Biomarker discovery combined gradient-based attributions with parcel-wise statistics and connectivity contrasts. Results: On an illustrative best-performing fold, the transformer attained an accuracy of 0.77, sensitivity of 0.83, specificity of 0.88, F1-score of 0.77, and AUC of 0.76. Averaged cross-validation performance was lower but consistent with these findings. External testing yielded an accuracy of 0.60, AUC of 0.64, specificity of 0.80, and sensitivity of 0.40. Attribution-guided analysis identified distributed, network-level candidate biomarkers concentrated in limbic, somatomotor, default-mode, and salience systems. Conclusions: A regularized transformer on parcel-level rs-fMRI can achieve strong within-fold discrimination and produce interpretable candidate biomarkers. Results are encouraging but preliminary; larger multi-site validation, stability testing, and multiple-comparison control are required prior to clinical translation.
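
A minimal sketch of a transformer classifier over Schaefer-200 parcel time series with learned positional encoding and multi-head self-attention, as described above. Model dimensions, pooling, and dropout are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn

class ParcelTransformer(nn.Module):
    def __init__(self, n_parcels=200, d_model=128, n_heads=4, n_layers=2,
                 max_len=512, dropout=0.3, n_classes=2):
        super().__init__()
        self.proj = nn.Linear(n_parcels, d_model)      # embed each TR's parcel vector
        self.pos = nn.Embedding(max_len, d_model)      # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, n_heads, dim_feedforward=256,
                                           dropout=dropout, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, x):                               # x: (batch, time, n_parcels)
        t = torch.arange(x.size(1), device=x.device)
        h = self.proj(x) + self.pos(t)                  # add positional embeddings
        h = self.encoder(h).mean(dim=1)                 # temporal average pooling
        return self.head(h)

model = ParcelTransformer()
logits = model(torch.randn(4, 300, 200))                # 4 subjects, 300 TRs, 200 parcels
```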