
Brain Stroke Classification Using Wavelet Transform and MLP Neural Networks on DWI MRI Images

Mana Mohammadi, Amirhesam Jafari Rad, Ashkan Behrouzi

arXiv preprint · Jun 18, 2025
This paper presents a lightweight framework for classifying brain stroke types from Diffusion-Weighted Imaging (DWI) MRI scans, employing a Multi-Layer Perceptron (MLP) neural network with Wavelet Transform for feature extraction. Accurate and timely stroke detection is critical for effective treatment and improved patient outcomes in neuroimaging. While Convolutional Neural Networks (CNNs) are widely used for medical image analysis, their computational complexity often hinders deployment in resource-constrained clinical settings. In contrast, our approach combines Wavelet Transform with a compact MLP to achieve efficient and accurate stroke classification. Using the "Brain Stroke MRI Images" dataset, our method yields classification accuracies of 82.0% with the "db4" wavelet (level 3 decomposition) and 86.0% with the "Haar" wavelet (level 2 decomposition). This analysis highlights a balance between diagnostic accuracy and computational efficiency, offering a practical solution for automated stroke diagnosis. Future research will focus on enhancing model robustness and integrating additional MRI modalities for comprehensive stroke assessment.
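
A minimal sketch of the wavelet-feature-plus-MLP pipeline described above, assuming PyWavelets and scikit-learn; the data arrays, wavelet settings, and MLP size below are illustrative placeholders rather than the authors' released configuration.

    # Hedged sketch: 2-D wavelet features feeding a compact MLP classifier.
    import numpy as np
    import pywt
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    def wavelet_features(img: np.ndarray, wavelet: str = "haar", level: int = 2) -> np.ndarray:
        """Flatten the coarsest approximation sub-band of a multi-level 2-D DWT."""
        coeffs = pywt.wavedec2(img, wavelet=wavelet, level=level)
        return coeffs[0].ravel()

    # Stand-ins for the DWI slices and stroke-type labels (hypothetical data).
    X_imgs = np.random.rand(200, 128, 128)
    y = np.random.randint(0, 2, size=200)

    X = np.stack([wavelet_features(im) for im in X_imgs])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print("test accuracy:", clf.score(X_te, y_te))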

Automated Multi-grade Brain Tumor Classification Using Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network in MRI Images.

Thanya T, Jeslin T

PubMed · Jun 18, 2025
Brain tumor classification using Magnetic Resonance Imaging (MRI) images is an important and emerging field of medical imaging and artificial intelligence. With advancements in technology, particularly in deep learning and machine learning, researchers and clinicians are leveraging these tools to build models that can reliably detect and classify brain tumors from MRI data. However, the task presents a number of challenges, including the intricacy of tumor types and grades, intensity variations across MRI acquisitions, and differences in tumor severity. This paper proposes a Multi-Grade Hierarchical Classification Network Model (MGHCN) for the hierarchical classification of tumor grades in MRI images. The model's distinctive feature lies in its ability to categorize tumors into multiple grades, thereby capturing the hierarchical nature of tumor severity. To address variations in intensity levels across different MRI samples, an Improved Adaptive Intensity Normalization (IAIN) pre-processing step is employed. This step standardizes intensity values, effectively mitigating the impact of intensity variations and ensuring more consistent analyses. The model utilizes the Dual-Tree Complex Wavelet Transform with Enhanced Trigonometric Features (DTCWT-ETF) for efficient feature extraction. DTCWT-ETF captures both spatial and frequency characteristics, allowing the model to distinguish between different tumor types more effectively. In the classification stage, the framework introduces the Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network (AHOHH-BiLSTM). This multi-grade classification model is designed with a comprehensive architecture, including distinct layers that enhance the learning process and adaptively refine parameters. The purpose of this study is to improve the precision of distinguishing different grades of tumors in MRI images. The framework is trained and evaluated on the BraTS Challenge 2021, Br35H, and BraTS Challenge 2023 datasets and assessed with precision, recall, and the F1-score. Together, these datasets and metrics provide a thorough picture of the MGHCN framework's capabilities and performance in brain tumor classification.
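
The paper's IAIN pre-processing step is not specified in the abstract; the short sketch below shows a common stand-in for MRI intensity standardization (percentile clipping followed by z-scoring), purely to illustrate the kind of normalization such a step performs.

    # Hedged sketch: generic MRI intensity standardization (not the paper's IAIN).
    import numpy as np

    def normalize_intensity(vol: np.ndarray, lo_pct: float = 1.0, hi_pct: float = 99.0) -> np.ndarray:
        """Clip intensity outliers, then z-score using a crude foreground mask."""
        lo, hi = np.percentile(vol, [lo_pct, hi_pct])
        clipped = np.clip(vol, lo, hi)
        mask = clipped > 0.1 * clipped.mean()   # rough brain/background split (assumption)
        mu, sigma = clipped[mask].mean(), clipped[mask].std()
        return (clipped - mu) / (sigma + 1e-8)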

Can CTA-based Machine Learning Identify Patients for Whom Successful Endovascular Stroke Therapy is Insufficient?

Jeevarajan JA, Dong Y, Ballekere A, Marioni SS, Niktabe A, Abdelkhaleq R, Sheth SA, Giancardo L

PubMed · Jun 18, 2025
Despite advances in endovascular stroke therapy (EST) devices and techniques, many patients are left with substantial disability, even if the final infarct volumes (FIVs) remain small. Here, we evaluate the performance of a machine learning (ML) approach using pre-treatment CT angiography (CTA) to identify this cohort of patients that may benefit from additional interventions. We identified consecutive large vessel occlusion (LVO) acute ischemic stroke (AIS) subjects who underwent EST with successful reperfusion in a multicenter prospective registry cohort. We included only subjects with FIV < 30 mL and recorded 90-day outcome (modified Rankin scale, mRS). A deep learning model was pre-trained and then fine-tuned to predict 90-day mRS 0-2 using pre-treatment CTA images (DSN-CTA model). The primary outcome was the predictive performance of the DSN-CTA model compared to a logistic regression model with clinical variables, measured by the area under the receiver operating characteristic curve (AUROC). The DSN-CTA model was pre-trained on 1,542 subjects and then fine-tuned and cross-validated with 48 subjects, all of whom underwent EST with TICI 2b-3 reperfusion. Of this cohort, 56.2% of subjects had 90-day mRS 3-6 despite successful EST and FIV < 30 mL. The DSN-CTA model showed significantly better performance than a model with clinical variables alone when predicting good 90-day mRS (AUROC 0.81 vs 0.492, p=0.006). The CTA-based machine learning model more reliably predicted unexpected poor functional outcome after successful EST and small FIV for patients with LVO AIS than standard clinical variables did. ML models may identify a priori patients in whom EST-based LVO reperfusion alone is insufficient to improve clinical outcomes. AIS = acute ischemic stroke; AUROC = area under the receiver operating characteristic curve; DSN-CTA = DeepSymNet-v3 model; EST = endovascular stroke therapy; FIV = final infarct volume; LVO = large vessel occlusion; ML = machine learning.
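
As a rough illustration of the primary comparison (an imaging-based model versus a clinical-variable logistic regression, judged by AUROC), here is a hedged sketch with synthetic placeholders; it does not reproduce the DSN-CTA network or the registry data.

    # Hedged sketch: AUROC comparison of clinical-only vs imaging-based predictions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_predict

    rng = np.random.default_rng(0)
    X_clinical = rng.normal(size=(48, 6))        # hypothetical clinical variables
    y = rng.integers(0, 2, size=48)              # 90-day mRS 0-2 vs 3-6 (toy labels)

    # Clinical baseline with cross-validated probabilities (mirrors a k-fold setup).
    clin_prob = cross_val_predict(LogisticRegression(max_iter=1000), X_clinical, y,
                                  cv=4, method="predict_proba")[:, 1]
    print("clinical-only AUROC:", roc_auc_score(y, clin_prob))

    # imaging_prob would come from the fine-tuned CTA network (not reproduced here).
    imaging_prob = rng.random(48)
    print("imaging-model AUROC:", roc_auc_score(y, imaging_prob))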

RESIGN: Alzheimer's Disease Detection Using Hybrid Deep Learning based Res-Inception Seg Network.

Amsavalli K, Suba Raja SK, Sudha S

PubMed · Jun 18, 2025
Alzheimer's disease (AD) is a leading cause of death, making early detection critical to improve survival rates. Conventional manual techniques struggle with early diagnosis due to the brain's complex structure, necessitating the use of dependable deep learning (DL) methods. This research proposes a novel RESIGN model, a hybrid Res-Inception-Seg network, for detecting AD from MRI images. The input MRI images were pre-processed using a Non-Local Means (NLM) filter to reduce noise artifacts. A ResNet-LSTM model was used for feature extraction, targeting White Matter (WM), Grey Matter (GM), and Cerebrospinal Fluid (CSF). The extracted features were concatenated and classified into Normal, MCI, and AD categories using an Inception V3-based classifier. Additionally, SegNet was employed for abnormal brain region segmentation. The RESIGN model achieved an accuracy of 99.46%, specificity of 98.68%, precision of 95.63%, recall of 97.10%, and an F1 score of 95.42%. It outperformed ResNet, AlexNet, DenseNet, and LSTM by 7.87%, 5.65%, 3.92%, and 1.53%, respectively, and further improved accuracy by 25.69%, 5.29%, 2.03%, and 1.71% over ResNet18, CLSTM, VGG19, and CNN, respectively. The integration of spatial-temporal feature extraction, hybrid classification, and deep segmentation makes RESIGN highly reliable in detecting AD. A 5-fold cross-validation confirmed its robustness, and its performance exceeded that of existing models on the ADNI dataset. However, there are potential limitations related to dataset bias and limited generalizability due to uniform imaging conditions. The proposed RESIGN model demonstrates a significant improvement in early AD detection through robust feature extraction and classification, offering a reliable tool for clinical diagnosis.
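
For the NLM pre-processing step mentioned above, a hedged one-function sketch using scikit-image follows; the filter parameters are illustrative, not the values used in the paper.

    # Hedged sketch: Non-Local Means denoising of a single MRI slice.
    import numpy as np
    from skimage.restoration import denoise_nl_means, estimate_sigma

    def nlm_denoise(slice2d: np.ndarray) -> np.ndarray:
        """Estimate the noise level, then apply fast non-local means."""
        sigma = float(np.mean(estimate_sigma(slice2d)))
        return denoise_nl_means(slice2d, h=1.15 * sigma, sigma=sigma,
                                patch_size=5, patch_distance=6, fast_mode=True)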

Cardiovascular risk in childhood and young adulthood is associated with the hemodynamic response function in midlife: The Bogalusa Heart Study.

Chuang KC, Naseri M, Ramakrishnapillai S, Madden K, Amant JS, McKlveen K, Gwizdala K, Dhullipudi R, Bazzano L, Carmichael O

PubMed · Jun 18, 2025
In functional MRI, a hemodynamic response function (HRF) describes how neural events are translated into a blood oxygenation response detected through imaging. The HRF has the potential to quantify neurovascular mechanisms by which cardiovascular risks modify brain health, but relationships among HRF characteristics, brain health, and cardiovascular modifiers of brain health have not been well studied to date. One hundred and thirty-seven middle-aged participants (mean age 53.6±4.7 years; 62% female; 78% White American and 22% African American participants) in the exploratory analysis from the Bogalusa Heart Study completed clinical evaluations from childhood to midlife and an adaptive Stroop task during fMRI in midlife. The HRFs of each participant within seventeen brain regions of interest (ROIs) previously identified as activated by this task were calculated using a convolutional neural network approach. Faster and more efficient neurovascular functioning was characterized in terms of five HRF characteristics: faster time to peak (TTP), shorter full width at half maximum (FWHM), smaller peak magnitude (PM), smaller trough magnitude (TM), and smaller area under the HRF curve (AUHRF). The composite HRF summary characteristics over all ROIs were calculated for multivariable and simple linear regression analyses. In multivariable models, a faster and more efficient HRF characteristic was found in non-smokers compared to smokers (AUHRF, p = 0.029). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM, TM, and AUHRF, p = 0.030, 0.042, and 0.032) and lower cerebral amyloid burden (FWHM, p = 0.027) in midlife, as well as greater response rate on the Stroop task (FWHM, p = 0.042) in midlife. In simple linear regression models, faster and more efficient HRF characteristics were found in women compared to men (TM, p = 0.019), in White American participants compared to African American participants (AUHRF, p = 0.044), and in non-smokers compared to smokers (TTP and AUHRF, p = 0.019 and 0.010). Faster and more efficient HRF characteristics were associated with lower systolic and diastolic blood pressures (FWHM and TM, p = 0.019 and 0.029) and lower BMI (FWHM and AUHRF, p = 0.025 and 0.017) in childhood and adolescence; with lower BMI (TTP, p = 0.049), lower cerebral amyloid burden (FWHM, p = 0.002), and lower white matter hyperintensity burden (FWHM, p = 0.046) in midlife; and with greater accuracy on the Stroop task (AUHRF, p = 0.037) in midlife. In a diverse middle-aged community sample, HRF-based indicators of faster and more efficient neurovascular functioning were associated with better brain health and cognitive function, as well as better lifespan cardiovascular health.
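
The five HRF summary characteristics named above can be read directly off an estimated HRF time course; the sketch below does so on a canonical double-gamma stand-in rather than the study's CNN-estimated HRFs.

    # Hedged sketch: TTP, FWHM, PM, TM, and AUHRF from a sampled HRF curve.
    import numpy as np
    from scipy.stats import gamma

    t = np.arange(0, 30, 0.1)                            # seconds
    hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)      # canonical double-gamma shape

    peak = np.argmax(hrf)
    ttp = t[peak]                                        # time to peak
    pm = hrf[peak]                                       # peak magnitude
    tm = abs(hrf.min())                                  # trough (undershoot) magnitude
    above_half = t[hrf >= pm / 2]
    fwhm = above_half[-1] - above_half[0]                # full width at half maximum
    auhrf = float(np.abs(hrf).sum() * (t[1] - t[0]))     # area under |HRF| (Riemann sum)

    print(f"TTP={ttp:.1f}s FWHM={fwhm:.1f}s PM={pm:.3f} TM={tm:.3f} AUHRF={auhrf:.3f}")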

Hierarchical refinement with adaptive deformation cascaded for multi-scale medical image registration.

Hussain N, Yan Z, Cao W, Anwar M

PubMed · Jun 18, 2025
Deformable image registration is a fundamental task in medical image analysis and is crucial in enabling early detection and accurate disease diagnosis. Although transformer-based architectures have demonstrated strong potential through attention mechanisms, challenges remain in effective feature extraction and spatial alignment, particularly within hierarchical attention frameworks. To address these limitations, we propose a novel registration framework that integrates hierarchical feature encoding in the encoder and an adaptive cascaded refinement strategy in the decoder. The model employs hierarchical cross-attention between fixed and moving images at multiple scales, enabling more precise alignment and improved registration accuracy. The decoder incorporates the Adaptive Cascaded Module (ACM), facilitating progressive deformation field refinement across multiple resolution levels. This approach captures coarse global transformations as well as local variations, resulting in smooth and anatomically consistent alignment. Moreover, rather than relying solely on the final decoder output, our framework leverages intermediate representations at each stage of the network, enhancing the robustness and precision of the registration process. By integrating deformations across all scales, our method achieves superior accuracy and adaptability. Comprehensive experiments on two widely used 3D brain MRI datasets, OASIS and LPBA40, demonstrate that the proposed framework consistently outperforms existing state-of-the-art approaches across multiple evaluation metrics of accuracy, robustness, and generalizability.
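
To make the cascaded-refinement idea concrete, here is a hedged PyTorch sketch of the basic step such a decoder repeats at each resolution: upsample a coarse displacement field and warp the moving image with it. The networks that predict the fields and the cross-attention encoder are omitted, and all sizes are toy values.

    # Hedged sketch: warping a moving image with a dense displacement field.
    import torch
    import torch.nn.functional as F

    def warp(moving: torch.Tensor, disp: torch.Tensor) -> torch.Tensor:
        """moving: (N,1,H,W); disp: (N,2,H,W) displacements in pixels, (x, y) channel order."""
        n, _, h, w = moving.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack((xs, ys), dim=0).float().unsqueeze(0) + disp
        grid[:, 0] = 2 * grid[:, 0] / (w - 1) - 1        # normalize x to [-1, 1]
        grid[:, 1] = 2 * grid[:, 1] / (h - 1) - 1        # normalize y to [-1, 1]
        return F.grid_sample(moving, grid.permute(0, 2, 3, 1), align_corners=True)

    # Coarse-to-fine step: upsample a coarse field, warp, then a finer stage would
    # predict a residual field from the warped result.
    moving = torch.rand(1, 1, 64, 64)
    coarse_disp = F.interpolate(torch.rand(1, 2, 16, 16) * 2.0, size=(64, 64),
                                mode="bilinear", align_corners=True)
    print(warp(moving, coarse_disp).shape)               # torch.Size([1, 1, 64, 64])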

Multimodal deep learning for predicting unsuccessful recanalization in refractory large vessel occlusion.

González JD, Canals P, Rodrigo-Gisbert M, Mayol J, García-Tornel A, Ribó M

PubMed · Jun 18, 2025
This study explores a multi-modal deep learning approach that integrates pre-intervention neuroimaging and clinical data to predict endovascular therapy (EVT) outcomes in acute ischemic stroke patients. To this end, consecutive stroke patients undergoing EVT were included in the study, including patients with suspected Intracranial Atherosclerosis-related Large Vessel Occlusion (ICAD-LVO) and other refractory occlusions. A retrospective, single-center cohort of patients with anterior circulation LVO who underwent EVT between 2017-2023 was analyzed. The refractory LVO (rLVO) class was defined as patients presenting any of the following: final angiographic stenosis > 50%, unsuccessful recanalization (eTICI 0-2a), or need for rescue treatments (angioplasty +/- stenting). Neuroimaging data included non-contrast CT and CTA volumes, automated vascular segmentation, and CT perfusion parameters. Clinical data included demographics, comorbidities, and stroke severity. Imaging features were encoded using convolutional neural networks and fused with clinical data using a DAFT module. Data were split 80% for training (with four-fold cross-validation) and 20% for testing. Explainability methods were used to analyze the contribution of clinical variables and regions of interest in the images. The final sample comprised 599 patients: 481 for training the model (77, 16.0% rLVO) and 118 for testing (16, 13.6% rLVO). The best model predicting rLVO using imaging alone achieved an AUC of 0.53 ± 0.02 and F1 of 0.19 ± 0.05, while the proposed multimodal model achieved an AUC of 0.70 ± 0.02 and F1 of 0.39 ± 0.02 in testing. Combining vascular segmentation, clinical variables, and imaging data improved prediction performance over single-source models. This approach offers an early alert to procedural complexity, potentially guiding more tailored, timely intervention strategies in the EVT workflow.
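
A hedged sketch of the imaging/clinical fusion idea follows. The study uses a DAFT module, in which tabular data modulate the image feature maps; the simpler concatenation fusion below only illustrates the multimodal setup, with toy tensor sizes throughout.

    # Hedged sketch: late fusion of a 3-D image encoder with clinical variables.
    import torch
    import torch.nn as nn

    class LateFusionNet(nn.Module):
        def __init__(self, n_clinical: int = 8):
            super().__init__()
            self.img_encoder = nn.Sequential(            # stand-in for the CNN CT/CTA encoder
                nn.Conv3d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv3d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            )
            self.head = nn.Sequential(
                nn.Linear(16 + n_clinical, 32), nn.ReLU(),
                nn.Linear(32, 1),                        # logit for refractory LVO
            )

        def forward(self, vol, clinical):
            feats = self.img_encoder(vol)
            return self.head(torch.cat([feats, clinical], dim=1))

    model = LateFusionNet()
    logit = model(torch.rand(2, 1, 32, 32, 32), torch.rand(2, 8))
    print(logit.shape)                                   # torch.Size([2, 1])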

Federated Learning for MRI-based BrainAGE: a multicenter study on post-stroke functional outcome prediction

Vincent Roca, Marc Tommasi, Paul Andrey, Aurélien Bellet, Markus D. Schirmer, Hilde Henon, Laurent Puy, Julien Ramon, Grégory Kuchcinski, Martin Bretzner, Renaud Lopes

arXiv preprint · Jun 18, 2025
Objective: Brain-predicted age difference (BrainAGE) is a neuroimaging biomarker reflecting brain health. However, training robust BrainAGE models requires large datasets, often restricted by privacy concerns. This study evaluates the performance of federated learning (FL) for BrainAGE estimation in ischemic stroke patients treated with mechanical thrombectomy, and investigates its association with clinical phenotypes and functional outcomes. Methods: We used FLAIR brain images from 1674 stroke patients across 16 hospital centers. We implemented standard machine learning and deep learning models for BrainAGE estimates under three data management strategies: centralized learning (pooled data), FL (local training at each site), and single-site learning. We reported prediction errors and examined associations between BrainAGE and vascular risk factors (e.g., diabetes mellitus, hypertension, smoking), as well as functional outcomes at three months post-stroke. Logistic regression evaluated BrainAGE's predictive value for these outcomes, adjusting for age, sex, vascular risk factors, stroke severity, time between MRI and arterial puncture, prior intravenous thrombolysis, and recanalisation outcome. Results: While centralized learning yielded the most accurate predictions, FL consistently outperformed single-site models. BrainAGE was significantly higher in patients with diabetes mellitus across all models. Comparisons between patients with good and poor functional outcomes, and multivariate predictions of these outcomes, showed the significance of the association between BrainAGE and post-stroke recovery. Conclusion: FL enables accurate age predictions without data centralization. The strong association between BrainAGE, vascular risk factors, and post-stroke recovery highlights its potential for prognostic modeling in stroke care.
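
For readers unfamiliar with the FL strategy, the sketch below shows the federated-averaging step that underlies "local training at each site": per-site model weights are averaged, weighted by site size, once per communication round. The model architecture and FL framework actually used by the authors are not shown.

    # Hedged sketch: one FedAvg aggregation round over per-site models.
    import copy
    import torch
    import torch.nn as nn

    def fedavg(site_models, site_sizes):
        """Return a global model whose parameters are the size-weighted average."""
        total = sum(site_sizes)
        global_model = copy.deepcopy(site_models[0])
        avg_state = {k: torch.zeros_like(v, dtype=torch.float32)
                     for k, v in global_model.state_dict().items()}
        for model, n in zip(site_models, site_sizes):
            for k, v in model.state_dict().items():
                avg_state[k] += v.float() * (n / total)
        global_model.load_state_dict(avg_state)
        return global_model

    # Stand-ins for per-hospital BrainAGE regressors (hypothetical site sizes).
    sites = [nn.Linear(10, 1) for _ in range(3)]
    global_model = fedavg(sites, site_sizes=[500, 600, 574])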

Implicit neural representations for accurate estimation of the standard model of white matter

Tom Hendriks, Gerrit Arends, Edwin Versteeg, Anna Vilanova, Maxime Chamberland, Chantal M. W. Tax

arXiv preprint · Jun 18, 2025
Diffusion magnetic resonance imaging (dMRI) enables non-invasive investigation of tissue microstructure. The Standard Model (SM) of white matter aims to disentangle dMRI signal contributions from intra- and extra-axonal water compartments. However, due to the model's high-dimensional nature, extensive acquisition protocols with multiple b-values and diffusion tensor shapes are typically required to mitigate parameter degeneracies. Even then, accurate estimation remains challenging due to noise. This work introduces a novel estimation framework based on implicit neural representations (INRs), which incorporate spatial regularization through the sinusoidal encoding of the input coordinates. The INR method is evaluated on both synthetic and in vivo datasets and compared to parameter estimates obtained using cubic polynomials, supervised neural networks, and nonlinear least squares. Results demonstrate superior accuracy of the INR method in estimating SM parameters, particularly in low signal-to-noise conditions. Additionally, spatial upsampling of the INR can represent the underlying dataset anatomically plausibly in a continuous way, which is unattainable with linear or cubic interpolation. The INR is fully unsupervised, eliminating the need for labeled training data. It achieves fast inference (~6 minutes), is robust to both Gaussian and Rician noise, supports joint estimation of SM kernel parameters and the fiber orientation distribution function with spherical harmonics orders up to at least 8 and non-negativity constraints, and accommodates spatially varying acquisition protocols caused by magnetic gradient non-uniformities. The combination of these properties, along with the possibility of easily adapting the framework to other dMRI models, positions INRs as a potentially important tool for analyzing and interpreting diffusion MRI data.
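
A hedged sketch of the core ingredient described above, an implicit neural representation with sinusoidal encoding of voxel coordinates, follows; the number of frequencies, layer widths, and output size are illustrative, not the paper's SM-fitting architecture.

    # Hedged sketch: coordinate MLP with sinusoidal (Fourier-feature) encoding.
    import torch
    import torch.nn as nn

    def sinusoidal_encoding(coords: torch.Tensor, n_freqs: int = 6) -> torch.Tensor:
        """coords: (N, 3) in [-1, 1]; returns (N, 3 * 2 * n_freqs) features."""
        freqs = (2.0 ** torch.arange(n_freqs)) * torch.pi
        angles = coords.unsqueeze(-1) * freqs            # (N, 3, n_freqs)
        return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)

    class INR(nn.Module):
        def __init__(self, n_freqs: int = 6, n_out: int = 8):
            super().__init__()
            self.n_freqs = n_freqs
            self.mlp = nn.Sequential(
                nn.Linear(3 * 2 * n_freqs, 256), nn.ReLU(),
                nn.Linear(256, 256), nn.ReLU(),
                nn.Linear(256, n_out),                   # e.g. per-voxel model parameters
            )

        def forward(self, coords):
            return self.mlp(sinusoidal_encoding(coords, self.n_freqs))

    pred = INR()(torch.rand(1024, 3) * 2 - 1)
    print(pred.shape)                                    # torch.Size([1024, 8])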

Multimodal MRI Marker of Cognition Explains the Association Between Cognition and Mental Health in UK Biobank

Buianova, I., Silvestrin, M., Deng, J., Pat, N.

medRxiv preprint · Jun 18, 2025
Background: Cognitive dysfunction often co-occurs with psychopathology. Advances in neuroimaging and machine learning have led to neural indicators that predict individual differences in cognition with reasonable performance. We examined whether these neural indicators explain the relationship between cognition and mental health in the UK Biobank cohort (n > 14,000). Methods: Using machine learning, we quantified the covariation between general cognition and 133 mental health indices and derived neural indicators of cognition from 72 neuroimaging phenotypes across diffusion-weighted MRI (dwMRI), resting-state functional MRI (rsMRI), and structural MRI (sMRI). With commonality analyses, we investigated how much of the cognition-mental health covariation is captured by each neural indicator and by neural indicators combined within and across MRI modalities. Results: The predictive association between mental health and cognition reached an out-of-sample r of 0.3. Neuroimaging phenotypes captured 2.1% to 25.8% of the cognition-mental health covariation. The highest proportion of variance explained by dwMRI was attributed to the number of streamlines connecting cortical regions (19.3%), by rsMRI to functional connectivity between 55 large-scale networks (25.8%), and by sMRI to the volumetric characteristics of subcortical structures (21.8%). Combining neuroimaging phenotypes within modalities improved the explanation to 25.5% for dwMRI, 29.8% for rsMRI, and 31.6% for sMRI, and combining them across all MRI modalities enhanced the explanation to 48%. Conclusions: We present an integrated approach to derive multimodal MRI markers of cognition that can be transdiagnostically linked to psychopathology. This demonstrates that the predictive ability of neural indicators extends beyond the prediction of cognition itself, enabling us to capture the cognition-mental health covariation.
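
A hedged sketch of a two-predictor commonality analysis, the decomposition used above to ask how much of the cognition-mental health covariation a neural indicator captures; all variables below are synthetic placeholders, not UK Biobank data.

    # Hedged sketch: unique and common variance components from nested regressions.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score

    rng = np.random.default_rng(1)
    n = 1000
    neural = rng.normal(size=(n, 1))                     # e.g. an MRI-based cognition score
    mental = 0.5 * neural[:, 0] + rng.normal(size=n)     # mental-health index (toy)
    cognition = 0.6 * neural[:, 0] + 0.3 * mental + rng.normal(size=n)

    def r2(X, y):
        return r2_score(y, LinearRegression().fit(X, y).predict(X))

    r2_neural = r2(neural, cognition)
    r2_mental = r2(mental.reshape(-1, 1), cognition)
    r2_both = r2(np.column_stack([neural, mental]), cognition)

    unique_neural = r2_both - r2_mental
    unique_mental = r2_both - r2_neural
    common = r2_neural + r2_mental - r2_both             # variance shared by both predictors
    print(f"unique(neural)={unique_neural:.3f} unique(mental)={unique_mental:.3f} common={common:.3f}")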