
Alzheimer's disease prediction using 3D-CNNs: Intelligent processing of neuroimaging data.

Rahman AU, Ali S, Saqia B, Halim Z, Al-Khasawneh MA, AlHammadi DA, Khan MZ, Ullah I, Alharbi M

pubmed logopapersJun 1 2025
Alzheimer's disease (AD) is a severe neurological illness that destroys memory and brain functioning, impairing an individual's capacity to work, think, and behave. The proportion of individuals suffering from AD is rapidly increasing; it has become a leading cause of disability and affects millions of people worldwide. Early detection slows disease progression, enables more effective therapies, and leads to better outcomes. However, predicting AD at an early stage is complex, since its clinical symptoms overlap with those of normal aging, mild cognitive impairment (MCI), and other neurodegenerative disorders. Prior studies indicate that early diagnosis is improved by the use of magnetic resonance imaging (MRI). However, MRI data are scarce, noisy, and highly variable across scanners and patient populations. 2D CNNs analyze 3D data slice by slice, losing the inter-slice information and contextual coherence required to detect subtle and diffuse brain alterations. This study offers a novel three-dimensional convolutional neural network (3D-CNN) and an intelligent preprocessing pipeline for AD prediction. The work uses an intelligent frame-selection mechanism and 3D dilated convolutions to recognize the most informative slices associated with AD, enabling the model to capture subtle and diffuse structural changes across the brain that are visible in MRI scans. The proposed model examines brain structures by recognizing small volumetric changes associated with AD and learning spatial hierarchies within MRI data. Across various experiments, we observed that the proposed 3D-CNN is highly proficient at capturing early brain changes. To validate the model's performance, the Alzheimer's Disease Neuroimaging Initiative (ADNI) benchmark dataset is used; the model achieves a maximum accuracy of 92.89%, outperforming state-of-the-art approaches.
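The dilated-convolution idea at the heart of this pipeline can be sketched in a few lines. The NumPy snippet below is a minimal, illustrative implementation of a valid-mode 3D dilated convolution; the kernel size and dilation rate are made up for the example and are not the paper's actual configuration.

```python
import numpy as np

def dilated_conv3d(volume, kernel, dilation=2):
    """Naive valid-mode 3D convolution with a dilation factor.

    Dilation inserts gaps between kernel taps, so a small kernel
    covers a wider span of slices/voxels without extra parameters.
    """
    kd, kh, kw = kernel.shape
    # Effective kernel extent along each axis after dilation.
    ed = (kd - 1) * dilation + 1
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    D, H, W = volume.shape
    out = np.zeros((D - ed + 1, H - eh + 1, W - ew + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                patch = volume[z:z + ed:dilation,
                               y:y + eh:dilation,
                               x:x + ew:dilation]
                out[z, y, x] = np.sum(patch * kernel)
    return out

vol = np.ones((8, 8, 8))   # toy "MRI volume"
k = np.ones((3, 3, 3))     # 27-tap kernel
res = dilated_conv3d(vol, k, dilation=2)
# With dilation=2, a 3x3x3 kernel spans 5 voxels per axis, so the
# valid output shrinks from 8 to 8 - 5 + 1 = 4 per axis.
print(res.shape)  # (4, 4, 4)
```

In practice one would use a framework's 3D convolution with a `dilation` argument; this loop version only makes the receptive-field arithmetic explicit.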

IM-Diff: Implicit Multi-Contrast Diffusion Model for Arbitrary Scale MRI Super-Resolution.

Liu L, Zou J, Xu C, Wang K, Lyu J, Xu X, Hu Z, Qin J

pubmed logopapersJun 1 2025
Diffusion models have garnered significant attention for MRI Super-Resolution (SR) and have achieved promising results. However, existing diffusion-based SR models face two formidable challenges: 1) insufficient exploitation of complementary information from multi-contrast images, which hinders the faithful reconstruction of texture details and anatomical structures; and 2) reliance on fixed magnification factors, such as 2× or 4×, which is impractical for clinical scenarios that require arbitrary scale magnification. To circumvent these issues, this paper introduces IM-Diff, an implicit multi-contrast diffusion model for arbitrary-scale MRI SR, leveraging the merits of both multi-contrast information and the continuous nature of implicit neural representation (INR). Firstly, we propose an innovative hierarchical multi-contrast fusion (HMF) module with reference-aware cross Mamba (RCM) to effectively incorporate target-relevant information from the reference image into the target image, while ensuring a substantial receptive field with computational efficiency. Secondly, we introduce multiple wavelet INR magnification (WINRM) modules into the denoising process by integrating the wavelet implicit neural non-linearity, enabling effective learning of continuous representations of MR images. The involved wavelet activation enhances space-frequency concentration, further bolstering representation accuracy and robustness in INR. Extensive experiments on three public datasets demonstrate the superiority of our method over existing state-of-the-art SR models across various magnification factors.
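The wavelet implicit neural nonlinearity mentioned above is, in its common real-valued form, an oscillation inside a Gaussian envelope (a Gabor-like wavelet), which is what gives it space-frequency concentration. The sketch below is a toy, not the paper's WINRM module: a single hidden layer mapping coordinates through such an activation, with arbitrary ω and σ.

```python
import numpy as np

def gabor_wavelet_activation(x, omega=10.0, sigma=5.0):
    """Real Gabor-wavelet nonlinearity: a cosine oscillation inside
    a Gaussian envelope, localized in both space and frequency."""
    return np.cos(omega * x) * np.exp(-(sigma * x) ** 2)

# One hidden layer of a toy implicit network: coordinates -> features.
rng = np.random.default_rng(0)
coords = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)  # 1D coordinates
W = rng.normal(size=(1, 4))
b = rng.normal(size=(4,))
features = gabor_wavelet_activation(coords @ W + b)

print(features.shape)  # (5, 4)
# The activation is bounded in [-1, 1] and peaks at x = 0.
```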

Association of the characteristics of brain magnetic resonance imaging with genes related to disease onset in schizophrenia patients.

Lin J, Wang B, Chen S, Cao F, Zhang J, Lu Z

pubmed logopapersJun 1 2025
Schizophrenia (SCH) is a complex neurodevelopmental disorder whose pathogenesis is not fully elucidated. This article aims to reveal disease-specific brain structural and functional changes and their potential genetic basis by analyzing the characteristics of brain magnetic resonance imaging (MRI) in SCH patients and related gene expression patterns. Differentially expressed genes (DEGs) between SCH and healthy control (NC) groups in the GSE48072 dataset were identified and functionally analyzed, and a protein-protein interaction (PPI) network was constructed to screen for core genes (CGs). Meanwhile, MRI data from the COBRE, the Human Connectome Project (HCP), the 1000 Functional Connectomes Project (FCP), and the Consortium for Reliability and Reproducibility (CoRR) were utilized to explore differences in brain activity patterns between SCH patients and the NC group using a 3D deep aggregation network (3D DANet) machine learning approach. A correlation analysis was performed between the identified CGs and MRI imaging characteristics. In total, 82 DEGs were identified in the GSE48072 dataset, primarily involved in cytotoxic granules, growth factor binding, and graft-versus-host disease pathways. The PPI network revealed KLRD1, KLRF1, CD244, GZMH, GZMA, GZMB, PRF1, and SLAMF6 as CGs. SCH patients exhibited relatively enhanced activity patterns in the frontoparietal attention network (FAN) and default mode network (DMN) across the four datasets, while showing a trend of weakening in most other networks. The 3D DANet demonstrated higher accuracy, specificity, and sensitivity in brain image classification. The correlation between enhancement of the DMN and genetic abnormalities was the strongest, followed by enhancement of the frontal and parietal attention networks. In contrast, the correlation between weakening of the sensory-motor network and occipital network and genetic abnormalities was relatively weak. The strongest correlation was observed between MRI characteristics and the KLRD1 and CD244 genes. The granzyme-mediated programmed cell death signaling pathway is related to the pathogenesis of SCH, and CD244 may serve as a potential biomarker for diagnosing SCH. The 3D DANet method improved the detection precision of brain structural and functional changes in SCH patients, providing a new perspective for understanding the biological basis of the disease.

Network Occlusion Sensitivity Analysis Identifies Regional Contributions to Brain Age Prediction.

He L, Wang S, Chen C, Wang Y, Fan Q, Chu C, Fan L, Xu J

pubmed logopapersJun 1 2025
Deep learning frameworks utilizing convolutional neural networks (CNNs) have frequently been used for brain age prediction and have achieved outstanding performance. Nevertheless, deep learning remains a black box, as it is hard to interpret which brain parts contribute significantly to the predictions. To tackle this challenge, we first trained a lightweight, fully convolutional model for brain age estimation on a large dataset (N = 3054, age range = 8-80 years) and tested it on an independent dataset (N = 555, age range = 8-80 years). We then developed an interpretable scheme combining network occlusion sensitivity analysis (NOSA) with a fine-grained human brain atlas to uncover the learned invariance of the model. Our findings show that the dorsolateral and dorsomedial frontal cortex, anterior cingulate cortex, and thalamus had the highest contributions to age prediction across the lifespan. More interestingly, we observed that different regions showed divergent patterns in their predictions for specific age groups and that the bilateral hemispheres contributed differently to the predictions. Regions in the frontal lobe were essential predictors in both the developmental and aging stages, with the thalamus remaining relatively stable and saliently correlated with other regional changes throughout the lifespan. The lateral and medial temporal brain regions gradually became involved during the aging phase. At the network level, the frontoparietal and default mode networks showed an inverted U-shaped contribution from the developmental to the aging stages. The framework can identify regional contributions to the brain age prediction model, which could help increase model interpretability when brain age serves as an aging biomarker.
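Occlusion sensitivity itself is simple to sketch: mask a region, re-run the model, and record how much the prediction changes. The paper occludes atlas-defined 3D brain regions; the toy below uses 2D square patches and a stand-in scoring function purely for illustration.

```python
import numpy as np

def occlusion_sensitivity(image, predict, patch=4, baseline=0.0):
    """Slide an occluding patch over the input and record how much
    the model's output changes when each region is masked out.

    `predict` is any callable mapping a 2D array to a scalar score.
    A larger absolute change means the region matters more."""
    base_score = predict(image)
    H, W = image.shape
    heatmap = np.zeros((H // patch, W // patch))
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            heatmap[i // patch, j // patch] = abs(base_score - predict(occluded))
    return heatmap

# Toy "model": its score depends only on the top-left quadrant.
img = np.ones((8, 8))
score_fn = lambda x: float(x[:4, :4].sum())
hm = occlusion_sensitivity(img, score_fn, patch=4)
print(hm)
# Only the top-left cell shows sensitivity; the other regions
# contribute nothing to this toy prediction.
```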

Knowledge-Aware Multisite Adaptive Graph Transformer for Brain Disorder Diagnosis.

Song X, Shu K, Yang P, Zhao C, Zhou F, Frangi AF, Xiao X, Dong L, Wang T, Wang S, Lei B

pubmed logopapersJun 1 2025
Brain disorder diagnosis via resting-state functional magnetic resonance imaging (rs-fMRI) is usually limited by complex imaging features and sample size. For brain disorder diagnosis, the graph convolutional network (GCN) has achieved remarkable success by capturing interactions between individuals and the population. However, there are three main limitations: 1) previous GCN approaches consider non-imaging information in edge construction but ignore the sensitivity differences of features to non-imaging information; 2) previous GCN approaches solely focus on establishing interactions between subjects (i.e., individuals and the population), disregarding the essential relationship between features; and 3) multisite data increase the sample size to help classifier training, but inter-site heterogeneity limits performance to some extent. This paper proposes a knowledge-aware multisite adaptive graph Transformer to address the above problems. First, we evaluate the sensitivity of features to each piece of non-imaging information, and then construct feature-sensitive and feature-insensitive subgraphs. Second, after fusing the above subgraphs, we integrate a Transformer module to capture the intrinsic relationship between features. Third, we design a domain adaptive GCN using multiple loss function terms to relieve data heterogeneity and to produce the final classification results. Finally, the proposed framework is validated on two brain disorder diagnostic tasks. Experimental results show that the proposed framework achieves state-of-the-art performance.
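The population-graph construction that such GCN diagnosis methods build on can be sketched as imaging-feature similarity modulated by non-imaging agreement. The snippet below is a generic version of that recipe with invented numbers; the paper's feature-sensitive and feature-insensitive subgraph construction is more elaborate.

```python
import numpy as np

def population_edges(features, phenotype, gamma=1.0):
    """Population-graph adjacency: imaging-feature similarity
    (Gaussian kernel) scaled by non-imaging agreement (full weight
    if two subjects share the phenotype label, reduced otherwise)."""
    n = len(features)
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            sim = np.exp(-gamma * np.sum((features[i] - features[j]) ** 2))
            agree = 1.0 if phenotype[i] == phenotype[j] else 0.5
            A[i, j] = sim * agree
    return A

# Three toy subjects: two with similar features from the same site.
feats = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0]])
site = ["A", "A", "B"]
A = population_edges(feats, site)
print(bool(A[0, 1] > A[0, 2]))  # True: similar features, same site
```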

Score-Based Diffusion Models With Self-Supervised Learning for Accelerated 3D Multi-Contrast Cardiac MR Imaging.

Liu Y, Cui ZX, Qin S, Liu C, Zheng H, Wang H, Zhou Y, Liang D, Zhu Y

pubmed logopapersJun 1 2025
Long scan time significantly hinders the widespread application of three-dimensional multi-contrast cardiac magnetic resonance (3D-MC-CMR) imaging. This study aims to accelerate 3D-MC-CMR acquisition with a novel method based on score-based diffusion models with self-supervised learning. Specifically, we first establish a mapping between the undersampled k-space measurements and the MR images, utilizing a self-supervised Bayesian reconstruction network. Second, we develop a joint score-based diffusion model on 3D-MC-CMR images to capture their inherent distribution. The 3D-MC-CMR images are finally reconstructed using conditional Langevin Markov chain Monte Carlo sampling. This approach enables accurate reconstruction without fully sampled training data. Its performance was tested on a dataset acquired with a 3D joint myocardial T1 and T1ρ mapping sequence. The T1 and T1ρ maps were estimated via a dictionary matching method from the reconstructed images. Experimental results show that the proposed method outperforms traditional compressed sensing and existing self-supervised deep learning MRI reconstruction methods. It also achieves high-quality T1 and T1ρ parametric maps close to the reference maps, even at a high acceleration rate of 14.
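Langevin MCMC sampling, the engine of the reconstruction step, is easy to demonstrate on a toy target whose score is known in closed form. The sketch below samples a standard normal (whose score is -x); the paper's sampler is additionally conditioned on the undersampled k-space data.

```python
import numpy as np

def langevin_sample(score, x0, step=0.01, n_steps=2000, seed=0):
    """Unadjusted Langevin dynamics: follow the score (gradient of
    the log-density) plus Gaussian noise to draw a sample from the
    target distribution."""
    rng = np.random.default_rng(seed)
    x = float(x0)
    for _ in range(n_steps):
        x = x + step * score(x) + np.sqrt(2 * step) * rng.normal()
    return x

# Target: standard normal, whose score is d/dx log p(x) = -x.
score_fn = lambda x: -x
samples = np.array([langevin_sample(score_fn, 5.0, seed=s) for s in range(200)])
print(float(samples.mean()))  # should be near 0
print(float(samples.std()))   # should be near 1
```

Even when started far from the mode (x0 = 5), the chains relax to the target distribution; in the conditional setting the score is replaced by a learned network evaluated alongside a data-consistency term.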

A Foundation Model for Lesion Segmentation on Brain MRI With Mixture of Modality Experts.

Zhang X, Ou N, Doga Basaran B, Visentin M, Qiao M, Gu R, Matthews PM, Liu Y, Ye C, Bai W

pubmed logopapersJun 1 2025
Brain lesion segmentation is crucial for neurological disease research and diagnosis. As different types of lesions exhibit distinct characteristics on different imaging modalities, segmentation methods are typically developed in a task-specific manner, where each segmentation model is tailored to a specific lesion type and modality. However, the use of task-specific models requires predetermination of the lesion type and imaging modality, which complicates their deployment in real-world scenarios. In this work, we propose a universal foundation model for brain lesion segmentation on magnetic resonance imaging (MRI), which can automatically segment different types of brain lesions given input of various MRI modalities. We develop a novel Mixture of Modality Experts (MoME) framework with multiple expert networks attending to different imaging modalities. A hierarchical gating network is proposed to combine the expert predictions and foster expertise collaboration. Moreover, to avoid the degeneration of each expert network, we introduce a curriculum learning strategy during training to preserve the specialisation of each expert. In addition to MoME, to handle the combination of multiple input modalities, we propose MoME+, which uses a soft dispatch network for input modality routing. We evaluated the proposed method on nine brain lesion datasets, encompassing five imaging modalities and eight lesion types. The results show that our model outperforms state-of-the-art universal models for brain lesion segmentation and achieves promising generalisation performance on unseen datasets.
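The gating idea behind a mixture-of-experts head can be sketched as a softmax-weighted combination of expert outputs. The toy below is not the paper's hierarchical gating network; the expert count, logit maps, and gate scores are invented for illustration.

```python
import numpy as np

def gated_mixture(expert_logits, gate_scores):
    """Combine per-expert logit maps with softmax gating weights,
    as in a mixture-of-experts head.

    `expert_logits` has shape (n_experts, H, W); returns the fused
    (H, W) map and the gating weights."""
    g = np.exp(gate_scores - gate_scores.max())  # stable softmax
    g = g / g.sum()
    # Weighted sum over the expert axis.
    return np.tensordot(g, expert_logits, axes=(0, 0)), g

# Three toy "modality experts", each emitting a 2x2 logit map.
experts = np.stack([np.full((2, 2), v) for v in (0.0, 1.0, 2.0)])
fused, weights = gated_mixture(experts, np.array([0.0, 0.0, 4.0]))
print(bool(weights[2] > 0.9))   # gate strongly favours expert 2
print(bool(fused[0, 0] > 1.8))  # fused map dominated by expert 2
```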

RS-MAE: Region-State Masked Autoencoder for Neuropsychiatric Disorder Classifications Based on Resting-State fMRI.

Ma H, Xu Y, Tian L

pubmed logopapersJun 1 2025
Dynamic functional connectivity (DFC) extracted from resting-state functional magnetic resonance imaging (fMRI) has been widely used for neuropsychiatric disorder classifications. However, serious information redundancy within DFC matrices can significantly undermine the performance of classification models based on them. Moreover, traditional deep models cannot adapt well to connectivity-like data, and insufficient training samples further hinder their effective training. In this study, we proposed a novel region-state masked autoencoder (RS-MAE) for proficient representation learning based on DFC matrices and, ultimately, neuropsychiatric disorder classification based on fMRI. Three strategies were taken to address the aforementioned limitations. First, a masked autoencoder (MAE) was introduced to reduce redundancy within DFC matrices and learn effective representations of human brain function simultaneously. Second, region-state (RS) patch embedding was proposed to replace the space-time patch embedding in video MAE to adapt to DFC matrices, in which only topological locality, rather than spatial locality, exists. Third, random state concatenation (RSC) was introduced as a DFC matrix augmentation approach, to alleviate the problem of training sample insufficiency. Neuropsychiatric disorder classifications were attained by fine-tuning the pretrained encoder included in RS-MAE. The performance of the proposed RS-MAE was evaluated on four publicly available datasets, achieving accuracies of 76.32%, 77.25%, 88.87%, and 76.53% for the attention-deficit/hyperactivity disorder (ADHD), autism spectrum disorder (ASD), Alzheimer's disease (AD), and schizophrenia (SCZ) classification tasks, respectively. These results demonstrate the efficacy of the RS-MAE as a proficient deep learning model for neuropsychiatric disorder classifications.
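The DFC matrices such models consume are typically built by sliding-window correlation of regional time series. A minimal NumPy sketch, with illustrative window and stride values:

```python
import numpy as np

def dynamic_fc(timeseries, window=30, stride=10):
    """Sliding-window dynamic functional connectivity: one Pearson
    correlation matrix per window of the regional time series.

    `timeseries` has shape (T, n_regions); the output has shape
    (n_windows, n_regions, n_regions)."""
    T, n = timeseries.shape
    mats = []
    for start in range(0, T - window + 1, stride):
        seg = timeseries[start:start + window]
        mats.append(np.corrcoef(seg.T))
    return np.stack(mats)

rng = np.random.default_rng(1)
ts = rng.normal(size=(120, 5))  # 120 time points, 5 toy regions
dfc = dynamic_fc(ts, window=30, stride=10)
print(dfc.shape)  # (10, 5, 5)
# Diagonals are 1 (self-correlation) in every window.
```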

Adaptive Breast MRI Scanning Using AI.

Eskreis-Winkler S, Bhowmik A, Kelly LH, Lo Gullo R, D'Alessio D, Belen K, Hogan MP, Saphier NB, Sevilimedu V, Sung JS, Comstock CE, Sutton EJ, Pinker K

pubmed logopapersJun 1 2025
Background: MRI protocols typically involve many imaging sequences and often require too much time. Purpose: To simulate artificial intelligence (AI)-directed stratified scanning for screening breast MRI with various triage thresholds and evaluate its diagnostic performance against that of the full breast MRI protocol. Materials and Methods: This retrospective reader study included consecutive contrast-enhanced screening breast MRI examinations performed between January 2013 and January 2019 at three regional cancer sites. In this simulation study, an in-house AI tool generated a suspicion score for subtraction maximum intensity projection images during a given MRI examination, and the score was used to determine whether to proceed with the full MRI protocol or end the examination early (abbreviated breast MRI [AB-MRI] protocol). Examinations with suspicion scores under the 50th percentile were read using both the AB-MRI protocol (ie, dynamic contrast-enhanced MRI scans only) and the full MRI protocol. Diagnostic performance metrics for screening with various AI triage thresholds were compared with those for screening without AI triage. Results: Of 863 women (mean age, 52 years ± 10 [SD]; 1423 MRI examinations), 51 received a cancer diagnosis within 12 months of screening. The diagnostic performance metrics for AI-directed stratified scanning that triaged 50% of examinations to AB-MRI versus full MRI protocol scanning were as follows: sensitivity, 88.2% (45 of 51; 95% CI: 79.4, 97.1) versus 86.3% (44 of 51; 95% CI: 76.8, 95.7); specificity, 80.8% (1108 of 1372; 95% CI: 78.7, 82.8) versus 81.4% (1117 of 1372; 95% CI: 79.4, 83.5); positive predictive value 3 (ie, percent of biopsies yielding cancer), 23.6% (43 of 182; 95% CI: 17.5, 29.8) versus 24.7% (42 of 170; 95% CI: 18.2, 31.2); cancer detection rate (per 1000 examinations), 31.6 (95% CI: 22.5, 40.7) versus 30.9 (95% CI: 21.9, 39.9); and interval cancer rate (per 1000 examinations), 4.2 (95% CI: 0.9, 7.6) versus 4.9 (95% CI: 1.3, 8.6). Specificity decreased by no more than 2.7 percentage points with AI triage. There were no AI-triaged examinations for which conducting the full MRI protocol would have resulted in additional cancer detection. Conclusion: AI-directed stratified MRI decreased simulated scan times while maintaining diagnostic performance. © RSNA, 2025. Supplemental material is available for this article. See also the editorial by Strand in this issue.
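The reported rates follow directly from the raw counts given in the Results; a quick sanity check in Python reproduces the AB-MRI arm's figures:

```python
def pct(numer, denom):
    """Percentage, rounded to one decimal as in the abstract."""
    return round(100 * numer / denom, 1)

# Recompute the AB-MRI arm's metrics from their raw counts.
sensitivity_ab = pct(45, 51)       # true positives / cancers
specificity_ab = pct(1108, 1372)   # true negatives / non-cancers
ppv3_ab = pct(43, 182)             # biopsies yielding cancer
print(sensitivity_ab, specificity_ab, ppv3_ab)  # 88.2 80.8 23.6

# Cancer detection rate per 1000 examinations.
cdr = round(1000 * 45 / 1423, 1)
print(cdr)  # 31.6
```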