Page 77 of 2372364 results

An Interpretable Multi-Plane Fusion Framework With Kolmogorov-Arnold Network Guided Attention Enhancement for Alzheimer's Disease Diagnosis

Xiaoxiao Yang, Meiliang Liu, Yunfang Xu, Zijin Li, Zhengye Si, Xinyue Yang, Zhiwen Zhao

arXiv preprint, Aug 8 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely impairs cognitive function and quality of life. Timely intervention in AD relies heavily on early and precise diagnosis, which remains challenging due to the complex and subtle structural changes in the brain. Most existing deep learning methods focus only on a single plane of structural magnetic resonance imaging (sMRI) and struggle to accurately capture the complex and nonlinear relationships among pathological regions of the brain, thus limiting their ability to precisely identify atrophic features. To overcome these limitations, we propose an innovative framework, MPF-KANSC, which integrates multi-plane fusion (MPF) for combining features from the coronal, sagittal, and axial planes, and a Kolmogorov-Arnold Network-guided spatial-channel attention mechanism (KANSC) to more effectively learn and represent sMRI atrophy features. Specifically, the proposed model enables parallel feature extraction from multiple anatomical planes, thus capturing more comprehensive structural information. The KANSC attention mechanism further leverages a more flexible and accurate nonlinear function approximation technique, facilitating precise identification and localization of disease-related abnormalities. Experiments on the ADNI dataset confirm that the proposed MPF-KANSC achieves superior performance in AD diagnosis. Moreover, our findings provide new evidence of right-lateralized asymmetry in subcortical structural changes during AD progression, highlighting the model's promising interpretability.

MRI-based radiomics for preoperative T-staging of rectal cancer: a retrospective analysis.

Patanè V, Atripaldi U, Sansone M, Marinelli L, Del Tufo S, Arrichiello G, Ciardiello D, Selvaggi F, Martinelli E, Reginelli A

PubMed, Aug 8 2025
Preoperative T-staging in rectal cancer is essential for treatment planning, yet conventional MRI shows limited accuracy (~60-78%). Our study investigates whether radiomic analysis of high-resolution T2-weighted MRI can non-invasively improve staging accuracy through a retrospective evaluation in a real-world surgical cohort. This single-center retrospective study included 200 patients (January 2024-April 2025) with pathologically confirmed rectal cancer, all undergoing preoperative high-resolution T2-weighted MRI within one week prior to curative surgery and no neoadjuvant therapy. Manual segmentation was performed using ITK-SNAP, followed by extraction of 107 radiomic features via PyRadiomics. Feature selection employed mRMR and LASSO logistic regression, culminating in a Rad-score predictive model. Statistical performance was evaluated using ROC curves (AUC), accuracy, sensitivity, specificity, and DeLong's test. Among the 200 patients, 95 were pathologically staged as T2 and 105 as T3-T4 (55 T3, 50 T4). After preprocessing, 26 radiomic features were retained; key features including ngtdm_contrast and ngtdm_coarseness showed AUC values > 0.70. The LASSO-based model achieved an AUC of 0.82 (95% CI: 0.75-0.89), with overall accuracy of 81%, sensitivity of 78%, and specificity of 84%. Radiomic analysis of standard preoperative T2-weighted MRI provides a reliable, non-invasive method to predict rectal cancer T-stage. This approach has the potential to enhance staging accuracy and inform personalized surgical planning. Prospective multicenter validation is required for broader clinical implementation.
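The selection-then-score pipeline the abstract describes can be sketched with an L1-penalized (LASSO) logistic regression: the penalty zeroes out most coefficients, and the surviving weighted sum of features acts as a Rad-score. This is a minimal illustration on synthetic stand-in data; the paper's 107 PyRadiomics features, mRMR pre-filter, and tuned penalty strength are not reproduced.

```python
# Hedged sketch: LASSO-penalized logistic regression for sparse radiomic
# feature selection and a linear Rad-score (synthetic data, illustrative C).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, p = 200, 107                          # 200 patients, 107 radiomic features
X = rng.normal(size=(n, p))
# toy label: T2 vs T3-T4 driven by two "informative" features plus noise
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# the L1 penalty drives most coefficients to exactly zero
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X, y)
kept = np.flatnonzero(lasso.coef_[0])    # indices of retained features

# Rad-score: the model's linear predictor, usable as a continuous marker
rad_score = X @ lasso.coef_[0] + lasso.intercept_[0]
auc = roc_auc_score(y, rad_score)
```

In the study's setting, the retained subset (26 of 107 features) and the Rad-score's ROC curve would then be evaluated on held-out patients rather than in-sample as here.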

Advanced dynamic ensemble framework with explainability driven insights for precision brain tumor classification across datasets.

Singh R, Gupta S, Ibrahim AO, Gabralla LA, Bharany S, Rehman AU, Hussen S

PubMed, Aug 8 2025
Accurate detection of brain tumors remains a significant challenge due to the diversity of tumor types and the role of human intervention in the diagnostic process. This study proposes a novel ensemble deep learning system for accurate brain tumor classification using MRI data. The proposed system integrates a fine-tuned Convolutional Neural Network (CNN), ResNet-50, and EfficientNet-B5 to create a dynamic ensemble framework that addresses existing challenges. An adaptive dynamic weight distribution strategy is employed during training to optimize the contribution of each network in the framework. To address class imbalance and improve model generalization, a customized weighted cross-entropy loss function is incorporated. The model gains interpretability through explainable artificial intelligence (XAI) techniques, including Grad-CAM, SHAP, SmoothGrad, and LIME, providing deeper insights into prediction rationale. The proposed system achieves a classification accuracy of 99.4% on the test set, 99.48% on the validation set, and 99.31% in cross-dataset validation. Furthermore, entropy-based uncertainty analysis quantifies prediction confidence, yielding an average entropy of 0.3093 and effectively identifying uncertain predictions to mitigate diagnostic errors. Overall, the proposed framework demonstrates high accuracy, robustness, and interpretability, highlighting its potential for integration into automated brain tumor diagnosis systems.
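Two of the ingredients above, class-weighted cross-entropy and entropy-based uncertainty scoring, are simple enough to sketch directly. The weights, probabilities, and 0.5 review threshold below are illustrative placeholders, not values from the paper.

```python
# Hedged sketch: class-weighted cross-entropy and Shannon-entropy
# uncertainty flagging on toy softmax outputs (illustrative numbers).
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean cross-entropy with per-class weights to counter class imbalance."""
    picked = probs[np.arange(len(labels)), labels]   # prob of the true class
    w = class_weights[labels]                        # weight of the true class
    return float(np.mean(-w * np.log(picked)))

def prediction_entropy(probs):
    """Shannon entropy per prediction; higher entropy = less confident."""
    return -np.sum(probs * np.log(probs + 1e-12), axis=1)

probs = np.array([[0.97, 0.01, 0.01, 0.01],   # confident prediction
                  [0.40, 0.30, 0.20, 0.10]])  # uncertain prediction
labels = np.array([0, 0])
weights = np.array([1.0, 2.0, 2.0, 2.0])      # upweight the minority classes

loss = weighted_cross_entropy(probs, labels, weights)
ent = prediction_entropy(probs)
flagged = ent > 0.5   # route high-entropy cases to human review
```

The average of `ent` over a test set corresponds to the kind of summary statistic the abstract reports (mean entropy 0.3093), with flagged cases deferred rather than auto-diagnosed.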

A Co-Plane Machine Learning Model Based on Ultrasound Radiomics for the Evaluation of Diabetic Peripheral Neuropathy.

Jiang Y, Peng R, Liu X, Xu M, Shen H, Yu Z, Jiang Z

PubMed, Aug 8 2025
Detection of diabetic peripheral neuropathy (DPN) is critical for preventing severe complications. Machine learning (ML) and radiomics offer promising approaches for the diagnosis of DPN; however, their application in ultrasound-based detection of DPN remains limited. Moreover, there is no consensus on whether longitudinal or transverse ultrasound planes provide more robust radiomic features for nerve evaluation. This study aimed to analyze and compare radiomic features from different ultrasound planes of the tibial nerve and to develop a co-plane fusion ML model to enhance the diagnostic accuracy of DPN. In our study, a total of 516 feet from 262 diabetic patients across two institutions were analyzed and stratified into a training cohort (n = 309), an internal testing cohort (n = 133), and an external testing cohort (n = 74). A total of 1316 radiomic features were extracted from both transverse and longitudinal planes of the tibial nerve. After feature selection, six ML algorithms were used to construct radiomics models based on transverse, longitudinal, and combined planes. The performance of these models was assessed using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). Shapley Additive exPlanations (SHAP) were employed to elucidate the key features and their contributions to predictions within the optimal model. The co-plane Support Vector Machine (SVM) model exhibited superior performance, achieving AUC values of 0.90 (95% CI: 0.86-0.93), 0.88 (95% CI: 0.84-0.91), and 0.70 (95% CI: 0.64-0.76) in the training, internal testing, and external testing cohorts, respectively. These results significantly exceeded those of the single-plane models, as determined by the DeLong test (P < 0.05). Calibration and DCA curves indicated a good model fit and suggested potential clinical utility.
The co-plane SVM model, which integrates transverse and longitudinal radiomic features of the tibial nerve, demonstrated optimal performance in DPN prediction, thereby significantly enhancing the efficacy of DPN diagnosis. This model may serve as a robust tool for noninvasive assessment of DPN, highlighting its promising applicability in clinical settings.
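The core co-plane idea, fusing transverse and longitudinal radiomic features at the feature level before fitting an SVM, can be sketched as a simple concatenation. This is a toy illustration with synthetic data; the paper's 1316 features, feature-selection step, and tuned SVM are omitted.

```python
# Hedged sketch: feature-level fusion of two ultrasound planes into one
# SVM input matrix (synthetic stand-in features, default SVM settings).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 309                                    # size of the training cohort
trans = rng.normal(size=(n, 20))           # transverse-plane radiomic features
longi = rng.normal(size=(n, 20))           # longitudinal-plane radiomic features
# toy DPN label driven by one feature from each plane, plus noise
y = (trans[:, 0] + longi[:, 0] + rng.normal(scale=0.8, size=n) > 0).astype(int)

co_plane = np.hstack([trans, longi])       # co-plane fusion: concatenate planes
model = make_pipeline(StandardScaler(), SVC(probability=True)).fit(co_plane, y)
train_acc = model.score(co_plane, y)
```

In the study, the fused model is then compared against single-plane models on held-out cohorts, with DeLong's test judging whether the AUC difference is significant.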

BM3D filtering with Ensemble Hilbert-Huang Transform and spiking neural networks for cardiomegaly detection in chest radiographs.

Patel RK

PubMed, Aug 8 2025
Cardiomyopathy is a life-threatening condition associated with heart failure, arrhythmias, thromboembolism, and sudden cardiac death, contributing significantly to worldwide morbidity and mortality. Cardiomegaly, usually the initial radiologic sign, may reflect the progression of underlying heart disease or a previously undiagnosed cardiac condition. Chest radiography is the most frequently used imaging method for detecting heart enlargement. Prompt and accurate diagnosis is essential for timely intervention and appropriate treatment planning to prevent disease progression and improve patient outcomes. The current work provides a new methodology for automated cardiomegaly diagnosis from X-ray images through the fusion of Block-Matching and 3D Filtering (BM3D) with the Ensemble Hilbert-Huang Transform (EHHT), convolutional neural networks (pretrained VGG16, ResNet50, InceptionV3, and DenseNet169), and Spiking Neural Networks (SNNs). BM3D is first used for edge retention and noise reduction, and EHHT is then applied to extract informative features from the X-ray images. The extracted features are processed by an SNN that simulates neural processes at a biological level, offering a biologically plausible classification solution. Gradient-weighted Class Activation Mapping (Grad-CAM) highlighted the regions that most influenced model predictions. The SNN performed best among all models tested, with 97.6% accuracy, 96.3% sensitivity, and 98.2% specificity. These findings show the SNN's high potential for facilitating accurate and efficient cardiomyopathy diagnosis, leading to enhanced clinical decision-making and patient outcomes.

Ensemble deep learning model for early diagnosis and classification of Alzheimer's disease using MRI scans.

Robinson Jeyapaul S, Kombaiya S, Jeya Kumar AK, Stanley VJ

PubMed, Aug 8 2025
Background: Alzheimer's disease (AD) is an irreversible neurodegenerative disorder characterized by progressive cognitive and memory decline. Accurate prediction of high-risk individuals enables early detection and better patient care. Objective: This study aims to enhance MRI-based AD classification through advanced image preprocessing, optimal feature selection, and ensemble deep learning techniques. Methods: The study employs advanced image preprocessing techniques such as normalization, affine transformation, and denoising to improve MRI quality. Brain structure segmentation is performed using the adaptive DeepLabV3+ approach for precise AD diagnosis. A novel optimal feature selection framework, H-IBMFO, integrates the Improved Beluga Whale Optimizer and Manta Foraging Optimization. An ensemble deep learning model combining MobileNet V2, DarkNet, and ResNet is used for classification. MATLAB is utilized for implementation. Results: The proposed system achieves 98.7% accuracy, with 98% precision, 98% sensitivity, 99% specificity, and a 98% F-measure, demonstrating superior classification performance with minimal false positives and negatives. Conclusions: The study establishes an efficient framework for AD classification, significantly improving early detection through optimized feature selection and deep learning. The high accuracy and reliability of the system validate its effectiveness in diagnosing AD stages.

Deep Learning Chest X-Ray Age, Epigenetic Aging Clocks and Associations with Age-Related Subclinical Disease in the Project Baseline Health Study.

Chandra J, Short S, Rodriguez F, Maron DJ, Pagidipati N, Hernandez AF, Mahaffey KW, Shah SH, Kiel DP, Lu MT, Raghu VK

PubMed, Aug 8 2025
Chronological age is an important component of medical risk scores and decision-making. However, there is considerable variability in how individuals age. We recently published an open-source deep learning model to assess biological age from chest radiographs (CXR-Age), which predicts all-cause and cardiovascular mortality better than chronological age. Here, we compare CXR-Age to two established epigenetic aging clocks (first generation, Horvath Age; second generation, DNAm PhenoAge) to test which is more strongly associated with cardiopulmonary disease and frailty. Our cohort consisted of 2,097 participants from the Project Baseline Health Study, a prospective cohort study of individuals from four US sites. We compared the association between the different aging clocks and measures of cardiopulmonary disease, frailty, and protein abundance collected at the participant's first annual visit using linear regression models adjusted for common confounders. We found that CXR-Age was associated with coronary calcium, cardiovascular risk factors, worsening pulmonary function, increased frailty, and abundance in plasma of two proteins implicated in neuroinflammation and aging. Associations with DNAm PhenoAge were weaker for pulmonary function and for all metrics in middle-aged adults. We identified thirteen proteins that were associated with DNAm PhenoAge, one of which (CDH13) was also associated with CXR-Age. No associations were found with Horvath Age. These results suggest that CXR-Age may serve as a better metric of cardiopulmonary aging than epigenetic aging clocks, especially in midlife adults.

Transformer-Based Explainable Deep Learning for Breast Cancer Detection in Mammography: The MammoFormer Framework

Ojonugwa Oluwafemi Ejiga Peter, Daniel Emakporuena, Bamidele Dayo Tunde, Maryam Abdulkarim, Abdullahi Bn Umar

arXiv preprint, Aug 8 2025
Breast cancer detection through mammography interpretation remains difficult because of the subtle abnormalities experts must identify and the variability in interpretation between readers. The potential of CNNs for medical image analysis faces two limitations: they do not adequately process both local information and wide contextual data, and they lack the explainable AI (XAI) capabilities doctors need to accept them in clinics. This work develops the MammoFormer framework, which unites a transformer-based architecture with multi-feature enhancement components and XAI functionalities in one framework. Seven architectures, spanning CNNs, Vision Transformer, Swin Transformer, and ConvNeXt, were tested alongside four enhancement techniques: original images, negative transformation, adaptive histogram equalization (AHE), and histogram of oriented gradients (HOG). The MammoFormer framework addresses critical clinical adoption barriers of AI mammography systems through: (1) systematic optimization of transformer architectures via architecture-specific feature enhancement, achieving up to 13% performance improvement, (2) comprehensive explainable AI integration providing multi-perspective diagnostic interpretability, and (3) a clinically deployable ensemble system combining CNN reliability with transformer global context modeling. Transformer models paired with suitable feature enhancements achieve results equal to or better than CNN approaches: ViT reaches 98.3% accuracy with AHE, while Swin Transformer gains a 13.0% advantage through HOG enhancement.
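Two of the enhancement inputs named above are easy to sketch: negative transformation and histogram equalization. The equalization below is the simple global variant, used here as a stand-in for the adaptive (AHE) version the paper applies; the toy image and bin count are illustrative.

```python
# Hedged sketch: negative transformation and (global) histogram
# equalization as image-enhancement inputs, on a synthetic toy image.
import numpy as np

def negative_transform(img):
    """Invert intensities: dense tissue becomes dark, background bright."""
    return img.max() + img.min() - img

def histogram_equalize(img, levels=256):
    """Spread intensities via the cumulative histogram (global version;
    AHE would do this per local window instead)."""
    hist, bins = np.histogram(img.ravel(), bins=levels, range=(0, levels))
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())     # normalize to [0, 1]
    mapped = np.interp(img.ravel(), bins[:-1], cdf * (levels - 1))
    return mapped.reshape(img.shape)

rng = np.random.default_rng(1)
mammo = rng.integers(0, 256, size=(64, 64)).astype(float)  # toy 8-bit image
neg = negative_transform(mammo)    # applying it twice recovers the original
eq = histogram_equalize(mammo)
```

In the framework, each enhanced version of the mammogram is fed to the architectures as an alternative input channel, and the per-architecture best enhancement is kept.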

impuTMAE: Multi-modal Transformer with Masked Pre-training for Missing Modalities Imputation in Cancer Survival Prediction

Maria Boyko, Aleksandra Beliaeva, Dmitriy Kornilov, Alexander Bernstein, Maxim Sharaev

arXiv preprint, Aug 8 2025
The use of diverse modalities, such as omics, medical images, and clinical data, can not only improve the performance of prognostic models but also deepen understanding of disease mechanisms and facilitate the development of novel treatment approaches. However, medical data are complex, often incomplete, and may lack entire modalities, making their effective handling crucial for training multimodal models. We introduce impuTMAE, a novel transformer-based end-to-end approach with an efficient multimodal pre-training strategy. It learns inter- and intra-modal interactions while simultaneously imputing missing modalities by reconstructing masked patches. Our model is pre-trained on heterogeneous, incomplete data and fine-tuned for glioma survival prediction using the TCGA-GBM/LGG and BraTS datasets, integrating five modalities: genetic (DNAm, RNA-seq), imaging (MRI, WSI), and clinical data. By addressing missing data during pre-training and enabling efficient resource utilization, impuTMAE surpasses prior multimodal approaches, achieving state-of-the-art performance in glioma patient survival prediction. Our code is available at https://github.com/maryjis/mtcp

Machine learning diagnostic model for amyotrophic lateral sclerosis analysis using MRI-derived features.

Gil Chong P, Mazon M, Cerdá-Alberich L, Beser Robles M, Carot JM, Vázquez-Costa JF, Martí-Bonmatí L

PubMed, Aug 8 2025
Amyotrophic lateral sclerosis (ALS) is a devastating motor neuron disease that is notoriously difficult to diagnose, and no reliable diagnostic biomarkers currently exist. In this scenario, our purpose is to apply machine learning algorithms to MRI-derived imaging variables to develop diagnostic models that facilitate and shorten the diagnostic process. A dataset of 211 patients (114 ALS, 45 mimic, 22 genetic carriers, and 30 controls) with MRI-derived features of volumetry, cortical thickness, and local iron (via T2* mapping and visual assessment of susceptibility imaging) was analyzed. A binary classification approach was taken to distinguish patients with and without ALS. A sequential, iteratively refined modeling methodology was followed, analyzing each group's performance separately to improve the models. Feature filtering techniques, dimensionality reduction techniques (PCA, kernel PCA), oversampling techniques (SMOTE, ADASYN), and classification techniques (logistic regression, LASSO, Ridge, ElasticNet, Support Vector Classifier, K-neighbors, random forest) were included. Three subsets of the available data were used for each proposed architecture: one containing automatically retrieved MRI-derived data, one containing the variables from visual analysis of the susceptibility imaging, and one containing all features. The best results were attained with all available data, using a voting classifier composed of five different classifiers: accuracy = 0.896, AUC = 0.929, sensitivity = 0.886, specificity = 0.929. These results confirm the potential of ML techniques applied to imaging variables of volumetry, cortical thickness, and local iron for the development of diagnostic models as clinical decision-support tools.
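A voting classifier over several base learners, as used for the best-performing model above, can be sketched with scikit-learn. The member set below is drawn from the techniques the abstract lists, but the exact five members, their hyperparameters, and the data are assumptions for illustration.

```python
# Hedged sketch: a five-member soft-voting classifier on synthetic data
# (member choice and settings are illustrative, not the authors' model).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=211, n_features=30, n_informative=8,
                           random_state=0)   # 211 patients, MRI-derived features
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

vote = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("lasso", LogisticRegression(penalty="l1", solver="liblinear", C=0.5)),
        ("svc", SVC(probability=True)),   # probability=True enables soft voting
        ("knn", KNeighborsClassifier()),
        ("rf", RandomForestClassifier(random_state=0)),
    ],
    voting="soft",   # average member probabilities instead of majority vote
).fit(X_tr, y_tr)

auc = roc_auc_score(y_te, vote.predict_proba(X_te)[:, 1])
acc = vote.score(X_te, y_te)
```

Soft voting averages the members' predicted probabilities, which tends to smooth out individual models' errors; the study reports exactly this kind of ensemble outperforming its single constituents.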