Page 36 of 1621612 results

Improved brain tumor classification through DenseNet121 based transfer learning.

Rasheed M, Jaffar MA, Akram A, Rashid J, Alshalali TAN, Irshad A, Sarwar N

pubmed · Aug 27 2025
Brain tumors severely affect health by allowing abnormal cells to grow unchecked in the brain, which makes early and accurate diagnosis essential for effective treatment. Many current diagnostic methods are time-consuming, rely heavily on manual interpretation, and frequently yield unsatisfactory results. This work detects brain tumors in MRI data using the DenseNet121 architecture with transfer learning. Model training used a Kaggle dataset. In the preprocessing stage, the MRI images were resized and denoised to help the model perform better. From a single MRI scan, the proposed approach classifies brain tissue into four groups: benign tumors, gliomas, meningiomas, and pituitary gland malignancies. We assessed the model's performance in terms of accuracy, precision, recall, and F1-score. The proposed approach proved effective for multi-class brain tumor categorization, attaining an average accuracy of 96.90%. Compared with previous diagnostic techniques, such as visual inspection and other machine learning models, the proposed DenseNet121-based approach is more accurate, takes less time to analyze, and requires less human input. Whereas human error introduces variability into conventional methods, the automated method delivers consistent, reproducible results. Based on MRI detection and transfer learning, this paper thus proposes an automated method for brain tumor classification that improves the precision and speed of diagnosis, benefiting both MRI-based classification research and clinical use. Further development of deep-learning models may improve tumor identification and prognosis prediction even more.
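The evaluation metrics this abstract names (accuracy, precision, recall, F1-score) can be sketched for the four-class setting as macro-averaged scores. This is an illustrative implementation, not the authors' code, and the class labels used below are hypothetical:

```python
def macro_metrics(y_true, y_pred, classes):
    """Macro-averaged precision, recall, and F1, plus overall accuracy."""
    precisions, recalls, f1s = [], [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precisions.append(prec); recalls.append(rec); f1s.append(f1)
    acc = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    k = len(classes)
    return acc, sum(precisions) / k, sum(recalls) / k, sum(f1s) / k

# Hypothetical label names for the four classes in the abstract
classes = ["benign", "glioma", "meningioma", "pituitary"]
```

Macro averaging weights each class equally, which matters here because tumor classes are rarely balanced in MRI datasets.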

Shining light on degeneracies and uncertainties in quantifying both exchange and restriction with time-dependent diffusion MRI using Bayesian inference

Maëliss Jallais, Quentin Uhl, Tommaso Pavan, Malwina Molendowska, Derek K. Jones, Ileana Jelescu, Marco Palombo

arxiv preprint · Aug 26 2025
Diffusion MRI (dMRI) biophysical models hold promise for characterizing gray matter tissue microstructure. Yet, the reliability of estimated parameters remains largely under-studied, especially in models that incorporate water exchange. In this study, we investigate the accuracy, precision, and presence of degeneracy of two recently proposed gray matter models, NEXI and SANDIX, using two acquisition protocols from the literature, on both simulated and in vivo data. We employ $\mu$GUIDE, a Bayesian inference framework based on deep learning, to quantify model uncertainty and detect parameter degeneracies, enabling a more interpretable assessment of fitted parameters. Our results show that while some microstructural parameters, such as extra-cellular diffusivity and neurite signal fraction, are robustly estimated, others, such as exchange time and soma radius, are often associated with high uncertainty and estimation bias, especially under realistic noise conditions and reduced acquisition protocols. Comparisons with non-linear least squares fitting underscore the added value of uncertainty-aware methods, which allow for the identification and filtering of unreliable estimates. These findings emphasize the need to report uncertainty and consider model degeneracies when interpreting model-based estimates. Our study advocates for the integration of probabilistic fitting approaches in neuroscience imaging pipelines to improve reproducibility and biological interpretability.
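The "identification and filtering of unreliable estimates" described above can be sketched as a simple rule on posterior samples: keep a parameter estimate only when its posterior spread is small relative to its mean. This is a minimal illustration of the idea, not the $\mu$GUIDE implementation, and the threshold value is an assumption:

```python
import numpy as np

def filter_estimates(posterior_samples, rel_uncertainty_max=0.25):
    """posterior_samples: dict mapping parameter name -> 1D array of
    posterior draws. Returns (estimates, rejected): posterior means for
    parameters whose coefficient of variation (std / |mean|) is below
    the threshold, and the names of the parameters that were rejected."""
    estimates, rejected = {}, []
    for name, draws in posterior_samples.items():
        draws = np.asarray(draws, dtype=float)
        mean, std = draws.mean(), draws.std()
        cv = std / abs(mean) if mean != 0 else np.inf
        if cv < rel_uncertainty_max:
            estimates[name] = mean
        else:
            rejected.append(name)
    return estimates, rejected
```

In the paper's terms, a well-constrained parameter such as extra-cellular diffusivity would pass this screen, while a degenerate one such as exchange time would show a wide posterior and be flagged.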

Optimized deep learning for brain tumor detection: a hybrid approach with attention mechanisms and clinical explainability.

Aiya AJ, Wani N, Ramani M, Kumar A, Pant S, Kotecha K, Kulkarni A, Al-Danakh A

pubmed · Aug 26 2025
Brain tumor classification (BTC) from Magnetic Resonance Imaging (MRI) is a critical diagnostic task that is highly important for treatment planning. In this study, we propose a hybrid deep learning (DL) model that integrates VGG16, an attention mechanism, and optimized hyperparameters to classify brain tumors into four categories: glioma, meningioma, pituitary tumor, and no tumor. The approach leverages state-of-the-art preprocessing techniques, transfer learning, and Gradient-weighted Class Activation Mapping (Grad-CAM) visualization on a dataset of 7023 MRI images to enhance both performance and interpretability. The proposed model achieves 99% test accuracy with strong precision and recall, and outperforms traditional approaches such as Support Vector Machines (SVM) with Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Principal Component Analysis (PCA) by a significant margin. Moreover, the model eliminates the need for manual labelling, a common challenge in this domain, by employing end-to-end learning, which allows it to derive meaningful features and reduce human input. The integrated attention mechanism further improves feature selection, and in turn classification accuracy, while Grad-CAM visualizations show which regions of the image had the greatest impact on classification decisions, increasing transparency in clinical settings. Overall, the synergy of accurate prediction, automatic feature extraction, and improved interpretability establishes the model as a valuable neural-network approach to brain tumor classification, with potential for enhancing medical imaging (MI) workflows and clinical decision-making.
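Grad-CAM, used above for clinical transparency, weights each feature map of the last convolutional layer by the global-average-pooled gradient of the class score and keeps only positive evidence. A minimal numpy sketch of that combination step (the real method differentiates through the network; here the activations and gradients are taken as given arrays):

```python
import numpy as np

def grad_cam(activations, gradients):
    """activations, gradients: arrays of shape (K, H, W) for the K
    feature maps of the last conv layer. Returns an (H, W) heatmap
    scaled to [0, 1] for overlay on the input image."""
    weights = gradients.mean(axis=(1, 2))             # (K,): GAP of gradients
    cam = np.tensordot(weights, activations, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0.0)                        # ReLU: keep positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()                         # normalize for display
    return cam
```

Regions with high heatmap values are the ones the classifier relied on, which is what lets a radiologist sanity-check a prediction.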

PRISM: A Framework Harnessing Unsupervised Visual Representations and Textual Prompts for Explainable MACE Survival Prediction from Cardiac Cine MRI

Haoyang Su, Jin-Yi Xiang, Shaohao Rui, Yifan Gao, Xingyu Chen, Tingxuan Yin, Xiaosong Wang, Lian-Ming Wu

arxiv preprint · Aug 26 2025
Accurate prediction of major adverse cardiac events (MACE) remains a central challenge in cardiovascular prognosis. We present PRISM (Prompt-guided Representation Integration for Survival Modeling), a self-supervised framework that integrates visual representations from non-contrast cardiac cine magnetic resonance imaging with structured electronic health records (EHRs) for survival analysis. PRISM extracts temporally synchronized imaging features through motion-aware multi-view distillation and modulates them using medically informed textual prompts to enable fine-grained risk prediction. Across four independent clinical cohorts, PRISM consistently surpasses classical survival prediction models and state-of-the-art (SOTA) deep learning baselines under internal and external validation. Further clinical findings demonstrate that the combined imaging and EHR representations derived from PRISM provide valuable insights into cardiac risk across diverse cohorts. Three distinct imaging signatures associated with elevated MACE risk are uncovered, including lateral wall dyssynchrony, inferior wall hypersensitivity, and anterior elevated focus during diastole. Prompt-guided attribution further identifies hypertension, diabetes, and smoking as dominant contributors among clinical and physiological EHR factors.

A Novel Model for Predicting Microsatellite Instability in Endometrial Cancer: Integrating Deep Learning-Pathomics and MRI-Based Radiomics.

Zhou L, Zheng L, Hong C, Hu Y, Wang Z, Guo X, Du Z, Feng Y, Mei J, Zhu Z, Zhao Z, Xu M, Lu C, Chen M, Ji J

pubmed · Aug 26 2025
To develop and validate a novel model based on multiparametric MRI (mpMRI) and whole slide images (WSIs) for predicting microsatellite instability (MSI) status in endometrial cancer (EC) patients. A total of 136 surgically confirmed EC patients were included in this retrospective study. Patients were randomly divided into a training set (96 patients) and a validation set (40 patients) in a 7:3 ratio. Deep learning with ResNet50 was used to extract deep-learning pathomics features, while Pyradiomics was applied to extract radiomics features from T2-weighted imaging (T2WI), diffusion-weighted imaging (DWI), and late arterial phase (AP) sequences. We then developed a deep learning pathoradiomics model (DLPRM) using a multilayer perceptron (MLP) over the radiomics and pathomics features. Furthermore, we validated the DLPRM comprehensively and compared it with the two single-scale signatures using the area under the receiver operating characteristic (ROC) curve (AUC), accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and F1-score. Finally, we employed Shapley additive explanations (SHAP) to elucidate the prediction mechanism of the model. After feature selection, a final set of nine radiomics features and 27 pathomics features was used to construct the radiomics signature (RS) and the deep learning pathomics signature (DLPS). The DLPRM combining the RS and DLPS showed favorable performance for predicting MSI status in the training set (AUC 0.960 [95% CI 0.936-0.984]) and in the validation set (AUC 0.917 [95% CI 0.824-1.000]). The AUCs of the DLPS and RS ranged from 0.817 to 0.943 across the training and validation sets. Decision curve analysis indicated that the DLPRM had relatively higher clinical net benefit.
DLPRM can effectively predict MSI status in EC patients based on pretreatment pathoradiomics images with high accuracy and robustness, could provide a novel tool to assist clinicians in individualized management of EC.
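The fusion step, an MLP over concatenated radiomics and pathomics features, can be sketched as a single-hidden-layer forward pass. The input dimensions (9 radiomics and 27 pathomics features) follow the abstract; the hidden size and the randomly drawn weights below are purely illustrative, not the study's trained model:

```python
import numpy as np

def fusion_forward(radiomics, pathomics, W1, b1, W2, b2):
    """Late fusion: concatenate the two feature vectors, then apply a
    one-hidden-layer MLP (ReLU hidden, sigmoid output) giving the
    predicted probability of MSI-high status."""
    x = np.concatenate([radiomics, pathomics])   # (9,) + (27,) -> (36,)
    h = np.maximum(W1 @ x + b1, 0.0)             # hidden layer with ReLU
    logit = W2 @ h + b2
    return 1.0 / (1.0 + np.exp(-logit))          # sigmoid probability

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 36)) * 0.1, np.zeros(16)  # illustrative weights
W2, b2 = rng.normal(size=16) * 0.1, 0.0
```

The design choice worth noting is that fusion happens at the feature level rather than by averaging the two signatures' outputs, so the MLP can learn interactions between imaging and histology features.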

MRExtrap: Longitudinal Aging of Brain MRIs using Linear Modeling in Latent Space

Jaivardhan Kapoor, Jakob H. Macke, Christian F. Baumgartner

arxiv preprint · Aug 26 2025
Simulating aging in 3D brain MRI scans can reveal disease progression patterns in neurological disorders such as Alzheimer's disease. Current deep learning-based generative models typically approach this problem by predicting future scans from a single observed scan. We instead model brain aging via linear models in the latent space of convolutional autoencoders. Our approach, MRExtrap, is based on the observation that autoencoders trained on brain MRIs create latent spaces in which aging trajectories appear approximately linear. We train such autoencoders and investigate how their latent spaces allow predicting future MRIs through age-based linear extrapolation, using an estimated latent progression rate $\boldsymbol{\beta}$. For single-scan prediction, we propose population-averaged and subject-specific priors on linear progression rates. We also demonstrate that predictions can be flexibly updated when additional scans are available, using Bayesian posterior sampling, which provides a mechanism for subject-specific refinement. On the ADNI dataset, MRExtrap predicts aging patterns accurately and beats a GAN-based baseline for single-volume prediction of brain aging. We also demonstrate and analyze multi-scan conditioning to incorporate subject-specific progression rates. Finally, we show that the latent progression rates in MRExtrap's linear framework correlate with disease- and age-based aging patterns from previously studied structural atrophy rates. MRExtrap offers a simple and robust method for the age-based generation of 3D brain MRIs, particularly valuable in scenarios with multiple longitudinal observations.
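The core mechanics described above, linear extrapolation in latent space plus a prior on the progression rate, reduce to a few lines. A sketch under the simplifying assumption of a precision-weighted Gaussian blend between a population prior and the rate implied by two scans (the paper's actual posterior sampling is richer than this):

```python
import numpy as np

def extrapolate_latent(z_obs, age_obs, age_target, beta):
    """Predict the latent code at age_target by linear extrapolation
    from one observed latent code z_obs at age_obs."""
    return z_obs + beta * (age_target - age_obs)

def posterior_beta(z1, t1, z2, t2, beta_prior, prior_prec, obs_prec):
    """Blend a population prior on the progression rate with the
    subject-specific rate implied by two scans, weighting each by its
    precision (an assumed Gaussian model)."""
    beta_subj = (z2 - z1) / (t2 - t1)
    return (prior_prec * beta_prior + obs_prec * beta_subj) / (prior_prec + obs_prec)
```

With only one scan the prior dominates; each additional scan pulls the estimate toward the subject's own trajectory, which is the "multi-scan conditioning" the abstract refers to.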

Optimizing meningioma grading with radiomics and deep features integration, attention mechanisms, and reproducibility analysis.

Albadr RJ, Sur D, Yadav A, Rekha MM, Jain B, Jayabalan K, Kubaev A, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Mohammadifard M, Farhood B, Akhavan-Sigari R

pubmed · Aug 26 2025
This study aims to develop a robust and clinically applicable framework for preoperative grading of meningiomas using T1-contrast-enhanced and T2-weighted MRI images. The approach integrates radiomic feature extraction, attention-guided deep learning models, and reproducibility assessment to achieve high diagnostic accuracy, model interpretability, and clinical reliability. We analyzed MRI scans from 2546 patients with histopathologically confirmed meningiomas (1560 low-grade, 986 high-grade). High-quality T1-contrast and T2-weighted images were preprocessed through harmonization, normalization, resizing, and augmentation. Tumor segmentation was performed using ITK-SNAP, and inter-rater reliability of radiomic features was evaluated using the intraclass correlation coefficient (ICC). Radiomic features were extracted via the SERA software, while deep features were derived from pre-trained models (ResNet50 and EfficientNet-B0), with attention mechanisms enhancing focus on tumor-relevant regions. Feature fusion and dimensionality reduction were conducted using PCA and LASSO. Ensemble models employing Random Forest, XGBoost, and LightGBM were implemented to optimize classification performance using both radiomic and deep features. Reproducibility analysis showed that 52% of radiomic features demonstrated excellent reliability (ICC > 0.90). Deep features from EfficientNet-B0 outperformed ResNet50, achieving AUCs of 94.12% (T1) and 93.17% (T2). Hybrid models combining radiomic and deep features further improved performance, with XGBoost reaching AUCs of 95.19% (T2) and 96.87% (T1). Ensemble models incorporating both deep architectures achieved the highest classification performance, with AUCs of 96.12% (T2) and 96.80% (T1), demonstrating superior robustness and accuracy. This work introduces a comprehensive and clinically meaningful AI framework that significantly enhances the preoperative grading of meningiomas. 
The model's high accuracy, interpretability, and reproducibility support its potential to inform surgical planning, reduce reliance on invasive diagnostics, and facilitate more personalized therapeutic decision-making in routine neuro-oncology practice.
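The reproducibility screen above (retaining radiomic features with ICC > 0.90) can be illustrated with a two-way random-effects ICC(2,1). This is a minimal numpy version of the standard formula, not the SERA or ITK-SNAP tooling the study used:

```python
import numpy as np

def icc_2_1(X):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    X: (n_subjects, k_raters) matrix of one feature's values."""
    n, k = X.shape
    grand = X.mean()
    row_means, col_means = X.mean(axis=1), X.mean(axis=0)
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    msr = ss_rows / (n - 1)                       # between-subject mean square
    msc = ss_cols / (k - 1)                       # between-rater mean square
    sse = ((X - grand) ** 2).sum() - ss_rows - ss_cols
    mse = sse / ((n - 1) * (k - 1))               # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A feature passing ICC > 0.90 under this formula is one whose value depends almost entirely on the tumor, not on which rater segmented it.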

EffNetViTLoRA: An Efficient Hybrid Deep Learning Approach for Alzheimer's Disease Diagnosis

Mahdieh Behjat Khatooni, Mohsen Soryani

arxiv preprint · Aug 26 2025
Alzheimer's disease (AD) is one of the most prevalent neurodegenerative disorders worldwide. As it progresses, it leads to the deterioration of cognitive functions. Since AD is irreversible, early diagnosis is crucial for managing its progression. Mild Cognitive Impairment (MCI) represents an intermediate stage between Cognitively Normal (CN) individuals and those with AD, and is considered a transitional phase from normal cognition to Alzheimer's disease. Diagnosing MCI is particularly challenging due to the subtle differences between adjacent diagnostic categories. In this study, we propose EffNetViTLoRA, a generalized end-to-end model for AD diagnosis using the whole Alzheimer's Disease Neuroimaging Initiative (ADNI) Magnetic Resonance Imaging (MRI) dataset. Our model integrates a Convolutional Neural Network (CNN) with a Vision Transformer (ViT) to capture both local and global features from MRI images. Unlike previous studies that rely on limited subsets of data, our approach is trained on the full T1-weighted MRI dataset from ADNI, resulting in a more robust and unbiased model. This comprehensive methodology enhances the model's clinical reliability. Furthermore, fine-tuning large pretrained models often yields suboptimal results when the source and target dataset domains differ. To address this, we incorporate Low-Rank Adaptation (LoRA) to effectively adapt the pretrained ViT model to our target domain. This method enables efficient knowledge transfer and reduces the risk of overfitting. Our model achieves a classification accuracy of 92.52% and an F1-score of 92.76% across three diagnostic categories (AD, MCI, and CN) on the full ADNI dataset.
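LoRA, used above to adapt the pretrained ViT, freezes each base weight matrix W and learns a low-rank update ΔW = (α/r)·B·A with far fewer trainable parameters. A framework-free numpy sketch of the forward pass; the shapes and the zero initialization of B follow the general LoRA recipe, while the rank, scaling, and weights here are illustrative:

```python
import numpy as np

class LoRALinear:
    """Frozen dense layer with a trainable low-rank additive update."""
    def __init__(self, W, r=4, alpha=8, rng=None):
        rng = rng or np.random.default_rng(0)
        d_out, d_in = W.shape
        self.W = W                                  # frozen pretrained weight
        self.A = rng.normal(size=(r, d_in)) * 0.01  # trainable down-projection
        self.B = np.zeros((d_out, r))               # zero init: update starts at 0
        self.scale = alpha / r

    def __call__(self, x):
        # Effective weight is W + (alpha/r) * B @ A; only A and B train.
        return x @ (self.W + self.scale * self.B @ self.A).T
```

Because B starts at zero, the adapted layer initially reproduces the pretrained model exactly, and fine-tuning only has to learn the low-rank correction for the new domain.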

Toward Non-Invasive Voice Restoration: A Deep Learning Approach Using Real-Time MRI

Saleh, M. W.

medrxiv preprint · Aug 26 2025
Despite recent advances in brain-computer interfaces (BCIs) for speech restoration, existing systems remain invasive, costly, and inaccessible to individuals with congenital mutism or neurodegenerative disease. We present a proof-of-concept pipeline that synthesizes personalized speech directly from real-time magnetic resonance imaging (rtMRI) of the vocal tract, without requiring acoustic input. Segmented rtMRI frames are mapped to articulatory class representations using a Pix2Pix conditional GAN, which are then transformed into synthetic audio waveforms by a convolutional neural network modeling the articulatory-to-acoustic relationship. The outputs are rendered into audible form and evaluated with speaker-similarity metrics derived from Resemblyzer embeddings. While preliminary, our results suggest that even silent articulatory motion encodes sufficient information to approximate a speaker's vocal characteristics, offering a non-invasive direction for future speech restoration in individuals who have lost or never developed voice.
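Speaker-similarity scoring from fixed-length voice embeddings, as in the Resemblyzer-based evaluation above, typically reduces to cosine similarity between embedding vectors. A sketch of that comparison step (the embedding extraction itself is assumed to have already happened):

```python
import numpy as np

def speaker_similarity(emb_a, emb_b):
    """Cosine similarity between two speaker embeddings, in [-1, 1].
    Values near 1 indicate acoustically similar voices."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

In a pipeline like the one above, the synthetic waveform's embedding would be compared against an embedding of the target speaker's reference audio.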

Whole-genome sequencing analysis of left ventricular structure and sphericity in 80,000 people

Pirruccello, J.

medrxiv preprint · Aug 26 2025
Background: Sphericity is a measurement of how closely an object approximates a globe. The sphericity of the blood pool of the left ventricle (LV) is an emerging measure linked to myocardial dysfunction. Methods: Video-based deep learning models were trained for semantic segmentation (pixel labeling) of cardiac magnetic resonance imaging in 84,327 UK Biobank participants. The labeled pixels were co-oriented in 3D and used to construct surface meshes, from which LV ejection fraction, mass, volume, surface area, and sphericity were calculated. Epidemiologic and genetic analyses were conducted, and polygenic score validation was performed in All of Us. Results: 3D LV sphericity was more strongly associated with dilated cardiomyopathy (DCM) (HR 10.3 per SD, 95% CI 6.1-17.3) than LV ejection fraction (HR 2.9 per SD reduction, 95% CI 2.4-3.6). Paired with whole genome sequencing, these measurements linked LV structure and function to 366 distinct common and low-frequency genetic loci, and to 17 genes with rare variant burden, spanning a 25-fold range of effect size. The discoveries included 22 of the 26 loci recently associated with DCM. LV genome-wide polygenic scores equaled or outperformed dedicated hypertrophic cardiomyopathy (HCM) and DCM polygenic scores for disease prediction. In All of Us, participants in the top 1% of the polygenic distribution had an estimated 6.6% risk of DCM by age 80, compared to 33% for carriers of rare truncating variants in the gene TTN. Conclusions: 3D sphericity is a distinct, heritable LV measurement intricately linked to risk for HCM and DCM. The genetic findings raise the possibility that the majority of common genetic loci to be discovered in future large-scale DCM analyses are already present in the current results.
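The sphericity measure studied above compares a shape's volume to its surface area: for volume V and surface area A, sphericity is π^(1/3)·(6V)^(2/3) / A, which equals 1 for a perfect sphere and falls below 1 as the shape deviates from one. A quick numpy check of that standard definition (the study computes V and A from segmentation-derived surface meshes, which is not reproduced here):

```python
import numpy as np

def sphericity(volume, surface_area):
    """Sphericity: surface area of a sphere with the same volume,
    divided by the shape's actual surface area. Equals 1 for a sphere,
    less than 1 for any other shape."""
    return np.pi ** (1 / 3) * (6 * volume) ** (2 / 3) / surface_area

# Sanity check against an ideal sphere of radius r
r = 2.0
V = 4 / 3 * np.pi * r ** 3
A = 4 * np.pi * r ** 2
```

For the LV blood pool, higher sphericity means a more globe-like (dilated) ventricle, which is why the measure tracks DCM risk.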