
Multi-Center 3D CNN for Parkinson's disease diagnosis and prognosis using clinical and T1-weighted MRI data.

Basaia S, Sarasso E, Sciancalepore F, Balestrino R, Musicco S, Pisano S, Stankovic I, Tomic A, Micco R, Tessitore A, Salvi M, Meiburger KM, Kostic VS, Molinari F, Agosta F, Filippi M

pubmed logopapers, Aug 5 2025
Parkinson's disease (PD) presents challenges in early diagnosis and progression prediction. Recent advancements in machine learning, particularly convolutional neural networks (CNNs), show promise in enhancing diagnostic accuracy and prognostic capability using neuroimaging data. The aims of this study were: (i) to develop a 3D CNN based on MRI to distinguish controls from PD patients and (ii) to employ the CNN to predict the progression of PD. Three cohorts were selected: 86 mild and 62 moderate-to-severe PD patients plus 60 controls; 14 mild-PD patients and 14 controls from the Parkinson's Progression Markers Initiative database; and 38 de novo mild-PD patients and 38 controls. All participants underwent MRI scans and clinical evaluation at baseline and over 2 years. PD subjects were classified into two clusters of different progression using k-means clustering based on baseline and follow-up UPDRS-III scores. A 3D CNN was built and tested on PD patients and controls with three binary classifications: controls vs moderate-to-severe PD, controls vs mild PD, and the two clusters of PD progression. The effect of transfer learning was also tested. The CNN effectively differentiated moderate-to-severe PD from controls (74% accuracy) using MRI data alone. Transfer learning significantly improved performance in distinguishing mild PD from controls (64% accuracy). For predicting disease progression, the model achieved over 70% accuracy by combining MRI and clinical data. The brain regions most influential in the CNN's decisions were visualized. A CNN integrating multimodal data and transfer learning provides encouraging results for early-stage classification and progression monitoring in PD, and its explainability through activation maps offers potential for clinical application in early diagnosis and personalized monitoring.
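The progression-clustering step described above can be sketched with scikit-learn's k-means on synthetic UPDRS-III scores; the score distributions and cluster sizes here are invented for illustration, not taken from the study:

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical UPDRS-III scores: rows = patients, cols = (baseline, 2-year follow-up)
rng = np.random.default_rng(0)
slow = rng.normal(loc=[20, 22], scale=2, size=(50, 2))   # mild worsening
fast = rng.normal(loc=[25, 40], scale=2, size=(50, 2))   # steep worsening
scores = np.vstack([slow, fast])

# Two progression clusters from baseline + follow-up motor scores
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(scores)
labels = km.labels_
```

The resulting labels would then serve as classification targets for the CNN's progression task.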

Modeling differences in neurodevelopmental maturity of the reading network using support vector regression on functional connectivity data

Lasnick, O. H. M., Luo, J., Kinnie, B., Kamal, S., Low, S., Marrouch, N., Hoeft, F.

biorxiv logopreprint, Aug 5 2025
The construction of growth charts trained to predict age or developmental deviation (the brain-age index) based on structural/functional properties of the brain may be informative of children's neurodevelopmental trajectories. When applied to both typically and atypically developing populations, results may indicate that a particular condition is associated with atypical maturation of certain brain networks. Here, we focus on the relationship between reading disorder (RD) and maturation of functional connectivity (FC) patterns in the prototypical reading/language network using a cross-sectional sample of N = 742 participants aged 6-21 years. A support vector regression model is trained to predict chronological age from FC data derived from a whole-brain model as well as multiple reduced models, which are trained on FC data generated from a successively smaller number of regions in the brain's reading network. We hypothesized that the trained models would show systematic underestimation of brain network maturity for poor readers, particularly for the models trained with reading/language regions. Comparisons of the different models' predictions revealed that while the whole-brain model outperforms the others in terms of overall prediction accuracy, all models successfully predicted brain maturity, including the one trained with the smallest amount of FC data. In addition, all models showed that reading ability affected the brain-age gap, with poor readers' ages being underestimated and advanced readers' ages being overestimated. Exploratory results demonstrated that the most important regions and connections for prediction were derived from the default mode and frontoparietal control networks.
Glossary
- Developmental dyslexia / reading disorder (RD): A specific learning disorder affecting reading ability in the absence of any other explanatory condition such as intellectual disability or visual impairment
- Support vector regression (SVR): A supervised machine learning technique which predicts continuous outcomes (such as chronological age) rather than classifying each observation; finds the best-fit function within a defined error margin
- Principal component analysis (PCA): A dimensionality reduction technique that transforms a high-dimensional dataset with many features per observation into a reduced set of principal components for each observation; each component is a linear combination of several original (correlated) features, and the final set of components are all orthogonal (uncorrelated) to one another
- Brain-age index: A numerical index quantifying deviation from the brain's typical developmental trajectory for a single individual; may be based on a variety of morphometric or functional properties of the brain, resulting in different estimates for the same participant depending on the imaging modality used
- Brain-age gap (BAG): The difference, given in units of time, between a participant's true chronological age and a predictive model's estimated age for that participant based on brain data (Actual - Predicted); may be used as a brain-age index

Highlights
- A machine learning model trained on functional data predicted participants' ages
- The model showed variability in age prediction accuracy based on reading skills
- The model highly weighted data from frontoparietal and default mode regions
- Neural markers of reading and language are diffusely represented in the brain
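The brain-age approach (SVR predicting chronological age from FC features, then computing the Actual - Predicted gap) can be sketched on synthetic data; the feature construction, signal strength, and sample sizes below are assumptions for illustration only:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic FC features whose values carry a weak age signal
rng = np.random.default_rng(1)
n, d = 300, 40
age = rng.uniform(6, 21, size=n)                        # ages matching the 6-21 range
w = rng.normal(size=d)                                  # hypothetical age loading per feature
fc = rng.normal(size=(n, d)) + 0.1 * np.outer(age, w)

# Train on the first 200 participants, predict the held-out 100
model = SVR(kernel="linear", C=1.0).fit(fc[:200], age[:200])
pred = model.predict(fc[200:])

# Brain-age gap as defined in the glossary: Actual - Predicted
bag = age[200:] - pred
```

A negative mean gap for a subgroup (predicted older than actual) would indicate overestimated maturity, and a positive one underestimated maturity, mirroring the poor-reader result above.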

Altered effective connectivity in patients with drug-naïve first-episode, recurrent, and medicated major depressive disorder: a multi-site fMRI study.

Dai P, Huang K, Hu T, Chen Q, Liao S, Grecucci A, Yi X, Chen BT

pubmed logopapers, Aug 5 2025
Major depressive disorder (MDD) is diagnosed through clinical assessments that are subjective and can be inconsistent. Resting-state functional magnetic resonance imaging (rs-fMRI) with connectivity analysis has been valuable for identifying neural correlates of MDD, yet most studies rely on single sites and small sample sizes. This study utilized large-scale, multi-site rs-fMRI data from the Rest-meta-MDD consortium to assess effective connectivity in patients with MDD and its subtypes, i.e., drug-naïve first-episode (FEDN), recurrent (RMDD), and medicated MDD (MMDD). To mitigate site-related variability, the ComBat algorithm was applied, and multivariate linear regression was used to control for age and gender effects. A random forest classification model was developed to identify the most predictive features. Nested five-fold cross-validation was used to assess model performance. The model effectively distinguished the FEDN subtype from the healthy control (HC) group, achieving 90.13% accuracy and 96.41% AUC. However, classification performance for RMDD vs. FEDN and MMDD vs. FEDN was lower, suggesting that differences between the subtypes were less pronounced than those between patients with MDD and the HC group. Patients with RMDD exhibited more extensive connectivity abnormalities in the frontal-limbic system and default mode network than patients with FEDN, implying heightened rumination. Additionally, treatment with medication appeared to partially modulate the aberrant connectivity, steering it toward normalization. This study showed altered brain connectivity in patients with MDD and its subtypes, which could be classified by machine learning models with robust performance. Abnormal connectivity may be a neural correlate of the presenting symptoms of MDD. These findings provide novel insights into the neural pathogenesis of MDD.
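The nested five-fold cross-validation used above can be sketched with scikit-learn: an inner loop tunes hyperparameters and an outer loop estimates performance on folds never seen during tuning. The synthetic features stand in for the harmonized connectivity data, and the parameter grid is hypothetical:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Synthetic stand-in for harmonized effective-connectivity features (e.g., FEDN vs HC)
X, y = make_classification(n_samples=200, n_features=50, n_informative=10,
                           random_state=0)

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

# Inner loop tunes the forest; outer loop gives an unbiased performance estimate
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [50, 100]},      # hypothetical grid
    scoring="roc_auc",
    cv=inner,
)
auc = cross_val_score(search, X, y, scoring="roc_auc", cv=outer)
```

Nesting matters because tuning and evaluating on the same folds would optimistically bias the reported AUC.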

Multi-modal MRI cascaded incremental reconstruction with coarse-to-fine spatial registration.

Wang Y, Sun Y, Liu J, Jing L, Liu Q

pubmed logopapers, Aug 5 2025
Magnetic resonance imaging (MRI) typically utilizes multiple contrasts to assess different tissue features, but prolonged scanning increases the risk of motion artifacts. Compressive sensing MRI (CS-MRI) employs computational reconstruction algorithms to accelerate imaging. Fully sampled auxiliary MR images can effectively assist the reconstruction of under-sampled target MR images. However, owing to spatial offsets and differences in imaging parameters, achieving cross-modal fusion is a key issue. To cope with this issue, we propose an end-to-end network integrating spatial registration and cascaded incremental reconstruction for multi-modal CS-MRI. Specifically, the proposed network comprises two stages: a coarse-to-fine spatial registration sub-network and a cascaded incremental reconstruction sub-network. The registration sub-network iteratively predicts deformation flow fields between under-sampled target images and fully sampled auxiliary images, gradually aligning them to mitigate spatial offsets. The cascaded incremental reconstruction sub-network adopts a new separated criss-cross window Transformer as its basic component, deployed in a dual-path design to fuse inter-modal and intra-modal features from the registered auxiliary images and under-sampled target images. Through cascade learning, we can recover incremental details from the fused features and continuously refine the target images. We validate our model on the IXI brain dataset, and the experimental results demonstrate that, compared to existing methods, our network exhibits superior performance.
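The cascaded incremental idea (each stage predicts a residual correction that refines the current estimate) can be illustrated with a toy numpy loop; the per-stage corrector here is a hypothetical stand-in for the paper's Transformer blocks and simply shrinks the error toward a reference:

```python
import numpy as np

# Toy cascade: each stage adds a predicted increment toward the reference.
rng = np.random.default_rng(2)
target = rng.normal(size=(64, 64))                  # fully sampled reference image
estimate = target + rng.normal(size=(64, 64))       # corrupted initial estimate

def stage(current, reference, strength=0.5):
    # Hypothetical corrector: in the paper this is a Transformer block fusing
    # registered auxiliary and target features; here it just shrinks the error.
    return strength * (reference - current)

errors = []
for _ in range(4):                                  # four cascaded stages
    estimate = estimate + stage(estimate, target)
    errors.append(float(np.abs(estimate - target).mean()))
```

The monotonically shrinking error is the behavior cascade learning is designed to produce, with each stage responsible only for the residual the previous stages left behind.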

STARFormer: A novel spatio-temporal aggregation reorganization transformer of fMRI for brain disorder diagnosis.

Dong W, Li Y, Zeng W, Chen L, Yan H, Siok WT, Wang N

pubmed logopapers, Aug 5 2025
Many existing methods that use functional magnetic resonance imaging (fMRI) to classify brain disorders, such as autism spectrum disorder (ASD) and attention deficit hyperactivity disorder (ADHD), often overlook the integration of spatial and temporal dependencies of the blood oxygen level-dependent (BOLD) signals, which may lead to inaccurate or imprecise classification results. To solve this problem, we propose a spatio-temporal aggregation reorganization transformer (STARFormer) that effectively captures both spatial and temporal features of BOLD signals by incorporating three key modules. The region of interest (ROI) spatial structure analysis module uses eigenvector centrality (EC) to reorganize brain regions based on effective connectivity, highlighting critical spatial relationships relevant to the brain disorder. The temporal feature reorganization module systematically segments the time series into equal-dimensional window tokens and captures multiscale features through variable window and cross-window attention. The spatio-temporal feature fusion module employs a parallel transformer architecture with dedicated temporal and spatial branches to extract integrated features. The proposed STARFormer has been rigorously evaluated on two publicly available datasets for the classification of ASD and ADHD. The experimental results confirm that STARFormer achieves state-of-the-art performance across multiple evaluation metrics, providing a more accurate and reliable tool for the diagnosis of brain disorders and biomedical research. The official implementation codes are available at: https://github.com/NZWANG/STARFormer.
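The eigenvector-centrality (EC) reorganization step can be sketched in numpy: rank ROIs by the leading eigenvector of a symmetric connectivity matrix and reorder the matrix accordingly. The matrix below is synthetic, and the paper's pipeline computes EC on effective connectivity rather than this toy input:

```python
import numpy as np

rng = np.random.default_rng(3)
n_roi = 8
conn = np.abs(rng.normal(size=(n_roi, n_roi)))
conn = (conn + conn.T) / 2                    # symmetric, non-negative connectivity

# Eigenvector centrality = leading eigenvector of the connectivity matrix
vals, vecs = np.linalg.eigh(conn)
centrality = np.abs(vecs[:, -1])              # eigenvector of the largest eigenvalue

order = np.argsort(centrality)[::-1]          # most central ROI first
reordered = conn[np.ix_(order, order)]        # ROI-reorganized matrix
```

Reordering by centrality places the most globally connected regions adjacently, which is the spatial prior STARFormer's attention then operates over.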

Glioblastoma Overall Survival Prediction With Vision Transformers

Yin Lin, Riccardo Barbieri, Domenico Aquino, Giuseppe Lauria, Marina Grisoli, Elena De Momi, Alberto Redaelli, Simona Ferrante

arxiv logopreprint, Aug 4 2025
Glioblastoma is one of the most aggressive and common brain tumors, with a median survival of 10-15 months. Predicting Overall Survival (OS) is critical for personalizing treatment strategies and aligning clinical decisions with patient outcomes. In this study, we propose a novel Artificial Intelligence (AI) approach for OS prediction from Magnetic Resonance Imaging (MRI), exploiting Vision Transformers (ViTs) to extract hidden features directly from MRI images and eliminating the need for tumor segmentation. Unlike traditional approaches, our method simplifies the workflow and reduces computational resource requirements. The proposed model was evaluated on the BRATS dataset, reaching an accuracy of 62.5% on the test set, comparable to the top-performing methods. Additionally, it demonstrated balanced performance across precision, recall, and F1 score, outperforming the best model on these metrics. The dataset size limits the generalization of the ViT, which typically requires larger datasets than convolutional neural networks; this limitation in generalization is observed across all the cited studies. This work highlights the applicability of ViTs to downsampled medical imaging tasks and establishes a foundation for OS prediction models that are computationally efficient and do not rely on segmentation.

Accurate and Interpretable Postmenstrual Age Prediction via Multimodal Large Language Model

Qifan Chen, Jin Cui, Cindy Duan, Yushuo Han, Yifei Shi

arxiv logopreprint, Aug 4 2025
Accurate estimation of postmenstrual age (PMA) at scan is crucial for assessing neonatal development and health. While deep learning models have achieved high accuracy in predicting PMA from brain MRI, they often function as black boxes, offering limited transparency and interpretability in clinical decision support. In this work, we address the dual challenge of accuracy and interpretability by adapting a multimodal large language model (MLLM) to perform both precise PMA prediction and clinically relevant explanation generation. We introduce a parameter-efficient fine-tuning (PEFT) strategy using instruction tuning and Low-Rank Adaptation (LoRA) applied to the Qwen2.5-VL-7B model. The model is trained on four 2D cortical surface projection maps derived from neonatal MRI scans. By employing distinct prompts for training and inference, our approach enables the MLLM to handle a regression task during training and generate clinically relevant explanations during inference. The fine-tuned model achieves a low prediction error with a 95 percent confidence interval of 0.78 to 1.52 weeks, while producing interpretable outputs grounded in developmental features, marking a significant step toward transparent and trustworthy AI systems in perinatal neuroscience.
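The Low-Rank Adaptation (LoRA) used for parameter-efficient fine-tuning can be sketched in plain numpy: the frozen weight W is augmented with a trainable low-rank product scaled by alpha / r, and because B is initialized to zero, the adapted layer starts out identical to the pretrained one. Dimensions and scales below are illustrative, not taken from the Qwen2.5-VL-7B setup:

```python
import numpy as np

rng = np.random.default_rng(4)
d_out, d_in, r, alpha = 16, 32, 4, 8

W = rng.normal(size=(d_out, d_in))             # frozen pretrained weight
A = rng.normal(scale=0.01, size=(r, d_in))     # trainable down-projection
B = np.zeros((d_out, r))                       # trainable up-projection, zero init

def lora_forward(x):
    # Effective weight = frozen W plus scaled low-rank update (alpha / r) * B @ A
    return x @ (W + (alpha / r) * B @ A).T

x = rng.normal(size=(5, d_in))
y0 = lora_forward(x)                           # B = 0: identical to the frozen layer
B = rng.normal(scale=0.1, size=(d_out, r))     # pretend training updated B
y1 = lora_forward(x)
```

Only A and B (here 4x32 + 16x4 values) would be trained, a small fraction of the d_out x d_in parameters in W, which is what makes the fine-tuning parameter-efficient.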

Explainable AI Methods for Neuroimaging: Systematic Failures of Common Tools, the Need for Domain-Specific Validation, and a Proposal for Safe Application

Nys Tjade Siegel, James H. Cole, Mohamad Habes, Stefan Haufe, Kerstin Ritter, Marc-André Schulz

arxiv logopreprint, Aug 4 2025
Trustworthy interpretation of deep learning models is critical for neuroimaging applications, yet commonly used Explainable AI (XAI) methods lack rigorous validation, risking misinterpretation. We performed the first large-scale, systematic comparison of XAI methods on ~45,000 structural brain MRIs using a novel XAI validation framework. This framework establishes verifiable ground truth by constructing prediction tasks with known signal sources - from localized anatomical features to subject-specific clinical lesions - without artificially altering input images. Our analysis reveals systematic failures in two of the most widely used methods: GradCAM consistently failed to localize predictive features, while Layer-wise Relevance Propagation generated extensive, artifactual explanations that suggest incompatibility with neuroimaging data characteristics. Our results indicate that these failures stem from a domain mismatch, where methods with design principles tailored to natural images require substantial adaptation for neuroimaging data. In contrast, the simpler, gradient-based method SmoothGrad, which makes fewer assumptions about data structure, proved consistently accurate, suggesting its conceptual simplicity makes it more robust to this domain shift. These findings highlight the need for domain-specific adaptation and validation of XAI methods, suggest that interpretations from prior neuroimaging studies using standard XAI methodology warrant re-evaluation, and provide urgent guidance for practical application of XAI in neuroimaging.
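SmoothGrad, the method that proved robust above, averages input gradients over several noisy copies of the input. A minimal sketch, with a toy analytic gradient standing in for a network's saliency map:

```python
import numpy as np

rng = np.random.default_rng(5)

def grad(x):
    # Gradient of f(x) = sum(x**2), standing in for a backprop saliency map
    return 2 * x

def smoothgrad(x, sigma=0.1, n_samples=50):
    # Average gradients over noisy perturbations of the input
    grads = [grad(x + rng.normal(scale=sigma, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x = np.array([1.0, -2.0, 0.5])
saliency = smoothgrad(x)
```

The averaging suppresses high-frequency gradient noise while preserving the underlying attribution, and because the method makes no assumptions about image statistics, it transfers to neuroimaging data more readily than natural-image-tailored methods.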

Open-radiomics: a collection of standardized datasets and a technical protocol for reproducible radiomics machine learning pipelines.

Namdar K, Wagner MW, Ertl-Wagner BB, Khalvati F

pubmed logopapers, Aug 4 2025
As an important branch of machine learning pipelines in medical imaging, radiomics faces two major challenges, namely reproducibility and accessibility. In this work, we introduce open-radiomics, a set of radiomics datasets along with a comprehensive radiomics pipeline based on our proposed technical protocol, to investigate the effects of radiomics feature extraction settings on the reproducibility of results. We curated large-scale radiomics datasets based on three open-source datasets: BraTS 2020 for high-grade glioma (HGG) versus low-grade glioma (LGG) classification and survival analysis, BraTS 2023 for O6-methylguanine-DNA methyltransferase (MGMT) classification, and non-small cell lung cancer (NSCLC) survival analysis from the Cancer Imaging Archive (TCIA). We used the BraTS 2020 open-source Magnetic Resonance Imaging (MRI) dataset to demonstrate how our proposed technical protocol can be utilized in radiomics-based studies. The cohort includes 369 adult patients with brain tumors (76 LGG and 293 HGG). Using the PyRadiomics library for LGG vs. HGG classification, we created 288 radiomics datasets: the combinations of 4 MRI sequences, 3 binWidths, 6 image normalization methods, and 4 tumor subregions. We used Random Forest classifiers, and for each radiomics dataset we repeated the training-validation-test (60%/20%/20%) experiment 100 times with different data splits and model random states (28,800 test results) and calculated the Area Under the Receiver Operating Characteristic Curve (AUROC). Unlike binWidth and image normalization, tumor subregion and imaging sequence significantly affected model performance. The T1 contrast-enhanced sequence and the union of the necrotic and non-enhancing tumor core subregions resulted in the highest AUROCs (average test AUROC 0.951, 95% confidence interval (0.949, 0.952)). Although several settings and data splits (28 out of 28,800) yielded a test AUROC of 1, they were irreproducible. Our experiments demonstrate that sources of variability in radiomics pipelines (e.g., tumor subregion) can have a significant impact on the results, which may lead to superficially perfect performances that are irreproducible.
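The repeated-split protocol (re-running train/test splits with different random states and collecting the test AUROC distribution) can be sketched with scikit-learn; this toy uses a simplified two-way split, 10 repeats instead of 100, and synthetic imbalanced data loosely mimicking the LGG/HGG ratio:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic imbalanced stand-in for a radiomics feature table
X, y = make_classification(n_samples=300, n_features=30, n_informative=8,
                           weights=[0.2, 0.8], random_state=0)

aurocs = []
for seed in range(10):                         # the protocol uses 100 repeats
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=seed)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed)
    clf.fit(X_tr, y_tr)
    aurocs.append(roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))

mean_auc = float(np.mean(aurocs))
```

Reporting the whole distribution (mean plus a confidence interval) rather than the single best split is exactly what guards against the irreproducible AUROC-of-1 outliers described above.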

Machine learning of whole-brain resting-state fMRI signatures for individualized grading of frontal gliomas.

Hu Y, Cao X, Chen H, Geng D, Lv K

pubmed logopapers, Aug 4 2025
Accurate preoperative grading of gliomas is critical for therapeutic planning and prognostic evaluation. We developed a noninvasive machine learning model leveraging whole-brain resting-state functional magnetic resonance imaging (rs-fMRI) biomarkers to discriminate between high-grade gliomas (HGGs) and low-grade gliomas (LGGs) in the frontal lobe. This retrospective study included 138 patients (78 LGGs, 60 HGGs) with left frontal gliomas. A total of 7134 features were extracted from the mean amplitude of low-frequency fluctuation (mALFF), mean fractional ALFF, mean percentage amplitude of fluctuation (mPerAF), and mean regional homogeneity (mReHo) maps and the resting-state functional connectivity (RSFC) matrix. Twelve predictive features were selected through the Mann-Whitney U test, correlation analysis, and the least absolute shrinkage and selection operator (LASSO) method. Patients were stratified and randomized into training and testing datasets at a 7:3 ratio. Logistic regression, random forest, support vector machine (SVM), and adaptive boosting algorithms were used to establish models. Model performance was evaluated using the area under the receiver operating characteristic curve, accuracy, sensitivity, and specificity. The 12 selected features included 7 RSFC features, 4 mPerAF features, and 1 mReHo feature. Based on these features, the model established using the SVM had the best performance: accuracy in the training and testing datasets was 0.957 and 0.727, respectively, and the areas under the receiver operating characteristic curves were 0.972 and 0.799, respectively. Our whole-brain rs-fMRI radiomics approach provides an objective tool for preoperative glioma stratification. The biological interpretability of the selected features reflects distinct neuroplasticity patterns between LGGs and HGGs, advancing understanding of glioma-network interactions.
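The staged feature selection the abstract describes (a univariate Mann-Whitney U filter, then LASSO, then an SVM) can be sketched on synthetic data; the effect size, feature counts, and alpha are assumptions, and the intermediate correlation-pruning step is omitted for brevity:

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import Lasso
from sklearn.svm import SVC

rng = np.random.default_rng(6)
n, d = 138, 200                               # cohort-sized toy data
y = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, d))
X[:, :5] += y[:, None] * 1.5                  # five informative features

# Stage 1: univariate Mann-Whitney U filter (keep p < 0.05)
pvals = np.array([mannwhitneyu(X[y == 0, j], X[y == 1, j]).pvalue
                  for j in range(d)])
keep = np.where(pvals < 0.05)[0]

# Stage 2: LASSO shrinks coefficients of uninformative survivors to exactly zero
lasso = Lasso(alpha=0.1).fit(X[:, keep], y)
selected = keep[lasso.coef_ != 0]

# Stage 3: SVM classifier on the final feature set
clf = SVC(kernel="rbf").fit(X[:, selected], y)
acc = clf.score(X[:, selected], y)            # training accuracy on the toy data
```

The filter-then-embedded-selection design reduces thousands of candidate features to a handful before the classifier ever sees them, which is essential when features (7134 here) far outnumber patients (138).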
