A Survey of Surrogates and Health Care Professionals Indicates Support of Cognitive Motor Dissociation-Assisted Prognostication.

Heinonen GA, Carmona JC, Grobois L, Kruger LS, Velazquez A, Vrosgou A, Kansara VB, Shen Q, Egawa S, Cespedes L, Yazdi M, Bass D, Saavedra AB, Samano D, Ghoshal S, Roh D, Agarwal S, Park S, Alkhachroum A, Dugdale L, Claassen J

PubMed | Jun 1, 2025
Prognostication of patients with acute disorders of consciousness is imprecise, but more accurate technology-supported predictions, such as cognitive motor dissociation (CMD), are emerging. CMD refers to the detection of willful brain activation following motor commands using functional magnetic resonance imaging or machine learning-supported analysis of the electroencephalogram in clinically unresponsive patients. CMD is associated with long-term recovery, but acceptance by surrogates and health care professionals is uncertain. The objective of this study was to determine receptiveness to CMD to inform goals of care (GoC) decisions and research participation among health care professionals and surrogates of behaviorally unresponsive patients. This was a two-center study of surrogates of, and health care professionals caring for, unconscious patients with severe neurological injury who were enrolled in two prospective US-based studies. Participants completed a 13-item survey to assess demographics, religiosity, minimal acceptable level of recovery, enthusiasm for research participation, and receptiveness to CMD to support GoC decisions. Completed surveys were obtained from 196 participants (133 health care professionals and 63 surrogates). Across all respondents, 93% indicated that they would want their loved one or the patient they cared for to participate in a research study that supports recovery of consciousness if CMD were detected, compared to 58% if CMD were not detected. Health care professionals were more likely than surrogates to change GoC with a positive (78% vs. 59%, p = 0.005) or negative (83% vs. 59%, p = 0.0002) CMD result. Participants who reported religion was the most important part of their life were least likely to change GoC with or without CMD. Participants who identified as Black (odds ratio [OR] 0.12, 95% confidence interval [CI] 0.04-0.36) or Hispanic/Latino (OR 0.39, 95% CI 0.2-0.75) and those for whom religion was the most important part of their life (OR 0.18, 95% CI 0.05-0.64) were more likely to accept a lower minimum level of recovery. Support for technology-supported prognostication and enthusiasm for clinical trial participation were evident across a diverse spectrum of health care professionals and surrogate decision-makers. Education for surrogates and health care professionals should accompany integration of technology-supported prognostication.
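
The odds ratios reported above come from a multivariable logistic regression. As a purely illustrative, hedged sketch (not the study's analysis code), the snippet below shows how such odds ratios and 95% confidence intervals can be obtained with statsmodels; the outcome and predictor names and the simulated data are hypothetical.

```python
# Hypothetical sketch: odds ratios with 95% CIs from a multivariable
# logistic regression. Variable names and data are illustrative only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 196
df = pd.DataFrame({
    "accepts_lower_recovery": rng.integers(0, 2, n),   # binary outcome (0/1)
    "religion_most_important": rng.integers(0, 2, n),  # example predictors
    "is_surrogate": rng.integers(0, 2, n),
})
X = sm.add_constant(df[["religion_most_important", "is_surrogate"]])
fit = sm.Logit(df["accepts_lower_recovery"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)          # OR per predictor
conf_int = np.exp(fit.conf_int())         # 95% CI bounds for each OR
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```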

Alzheimer's disease prediction using 3D-CNNs: Intelligent processing of neuroimaging data.

Rahman AU, Ali S, Saqia B, Halim Z, Al-Khasawneh MA, AlHammadi DA, Khan MZ, Ullah I, Alharbi M

PubMed | Jun 1, 2025
Alzheimer's disease (AD) is a severe neurological illness that destroys memory and impairs brain function, affecting an individual's capacity to work, think, and behave. The proportion of individuals suffering from AD is rapidly increasing; it has become a leading cause of disability and affects millions of people worldwide. Early detection slows disease progression, enables more effective therapies, and leads to better outcomes. However, predicting AD at an early stage is difficult because its clinical symptoms overlap with those of normal aging, mild cognitive impairment (MCI), and other neurodegenerative disorders. Prior studies indicate that early diagnosis is improved by the use of magnetic resonance imaging (MRI). However, MRI data are scarce, noisy, and highly variable across scanners and patient populations. 2D CNNs analyze 3D volumes slice by slice, losing the inter-slice information and contextual coherence required to detect subtle and diffuse brain alterations. This study offers a novel 3-dimensional convolutional neural network (3D-CNN) and an intelligent preprocessing pipeline for AD prediction. The work uses an intelligent frame-selection mechanism and 3D dilated convolutions to identify the slices most informative for AD. This enables the model to capture subtle and diffuse structural changes across the brain visible in MRI scans. The proposed model examines brain structures by recognizing small volumetric changes associated with AD and learning spatial hierarchies within MRI data. Across various experiments, we observed that the proposed 3D-CNN is highly proficient at capturing early brain changes. To validate the model's performance, the benchmark Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset was used, on which the model achieves a maximum accuracy of 92.89%, outperforming state-of-the-art approaches.
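
As a rough, hedged sketch of the architectural idea described here (not the authors' implementation), the following PyTorch model stacks 3D convolutions with increasing dilation so that later layers see a wider volumetric context; the input size, channel widths, and two-class output are assumptions.

```python
# Minimal sketch of a 3D CNN with dilated convolutions for volumetric MRI
# classification, assuming scans resampled to 64x64x64 voxels and two classes.
import torch
import torch.nn as nn

class Dilated3DCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),               # local features
            nn.BatchNorm3d(16), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=2, dilation=2),  # wider receptive field
            nn.BatchNorm3d(32), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(32, 64, kernel_size=3, padding=4, dilation=4),  # diffuse, global context
            nn.BatchNorm3d(64), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, D, H, W) volume
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = Dilated3DCNN()
logits = model(torch.randn(2, 1, 64, 64, 64))  # dummy batch of two volumes
print(logits.shape)  # torch.Size([2, 2])
```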

Association of the characteristics of brain magnetic resonance imaging with genes related to disease onset in schizophrenia patients.

Lin J, Wang B, Chen S, Cao F, Zhang J, Lu Z

PubMed | Jun 1, 2025
Schizophrenia (SCH) is a complex neurodevelopmental disorder whose pathogenesis is not fully elucidated. This article aims to reveal disease-specific brain structural and functional changes and their potential genetic basis by analyzing the characteristics of brain magnetic resonance imaging (MRI) in SCH patients together with related gene expression patterns. Differentially expressed genes (DEGs) between the SCH and healthy control (NC) groups in the GSE48072 dataset were identified and functionally analyzed, and a protein-protein interaction (PPI) network was constructed to screen for core genes (CGs). Meanwhile, MRI data from COBRE, the Human Connectome Project (HCP), the 1000 Functional Connectomes Project (FCP), and the Consortium for Reliability and Reproducibility (CoRR) were used to explore differences in brain activity patterns between SCH patients and the NC group using a 3D deep aggregation network (3D DANet) machine learning approach. A correlation analysis was performed between the identified CGs and the MRI imaging characteristics. In total, 82 DEGs were obtained from the GSE48072 dataset, primarily involved in cytotoxic granules, growth factor binding, and graft-versus-host disease pathways. The PPI network analysis identified KLRD1, KLRF1, CD244, GZMH, GZMA, GZMB, PRF1, and SLAMF6 as CGs. SCH patients exhibited relatively enhanced activity in the frontoparietal attention network (FAN) and default mode network (DMN) across the four datasets, while most other networks showed a trend toward weakening. The 3D DANet demonstrated higher accuracy, specificity, and sensitivity in brain image classification. The correlation between DMN enhancement and genetic abnormalities was the strongest, followed by enhancement of the frontal and parietal attention networks; in contrast, the correlation between weakening of the sensory-motor and occipital networks and genetic abnormalities was relatively weak. The strongest gene-level correlations with MRI characteristics were observed for KLRD1 and CD244. The granzyme-mediated programmed cell death signaling pathway is related to the pathogenesis of SCH, and CD244 may serve as a potential biological marker for diagnosing SCH. The 3D DANet method improved the detection precision of brain structural and functional changes in SCH patients, providing a new perspective for understanding the biological basis of the disease.
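
The imaging-gene correlation step can be illustrated with a minimal, hypothetical sketch: per-subject network-level activity (e.g., mean DMN strength) is correlated against the expression of candidate core genes such as KLRD1 and CD244. The data, sample size, and variable names below are placeholders, not values from the study.

```python
# Illustrative correlation between an imaging-derived network measure and
# candidate gene expression levels; all data here are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 60
dmn_strength = rng.normal(size=n_subjects)              # imaging feature per subject
gene_expr = {"KLRD1": rng.normal(size=n_subjects),      # normalized expression values
             "CD244": rng.normal(size=n_subjects)}

for gene, expr in gene_expr.items():
    r, p = stats.pearsonr(dmn_strength, expr)           # Pearson correlation and p-value
    print(f"{gene}: r={r:.3f}, p={p:.3g}")
```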

Network Occlusion Sensitivity Analysis Identifies Regional Contributions to Brain Age Prediction.

He L, Wang S, Chen C, Wang Y, Fan Q, Chu C, Fan L, Xu J

PubMed | Jun 1, 2025
Deep learning frameworks utilizing convolutional neural networks (CNNs) have frequently been used for brain age prediction and have achieved outstanding performance. Nevertheless, deep learning remains a black box, as it is hard to interpret which brain parts contribute significantly to the predictions. To tackle this challenge, we first trained a lightweight, fully convolutional neural network model for brain age estimation on a large sample (N = 3054, age range 8-80 years) and tested it on an independent data set (N = 555, age range 8-80 years). We then developed an interpretable scheme combining network occlusion sensitivity analysis (NOSA) with a fine-grained human brain atlas to uncover the learned invariance of the model. Our findings show that the dorsolateral and dorsomedial frontal cortex, anterior cingulate cortex, and thalamus had the highest contributions to age prediction across the lifespan. More interestingly, we observed that different regions showed divergent patterns in their predictions for specific age groups and that the bilateral hemispheres contributed differently to the predictions. Regions in the frontal lobe were essential predictors in both the developmental and aging stages, with the thalamus remaining relatively stable and saliently correlated with other regional changes throughout the lifespan. The lateral and medial temporal brain regions gradually became involved during the aging phase. At the network level, the frontoparietal and default mode networks showed an inverted U-shaped contribution from the developmental to the aging stages. The framework can identify regional contributions to the brain age prediction model, which could help increase model interpretability when brain age serves as an aging biomarker.
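
A minimal sketch of the occlusion idea underlying NOSA is shown below: each atlas region is zeroed out in turn and the change in the predicted age is recorded as that region's contribution. The model interface, atlas format, and occlusion value are assumptions, not the authors' code.

```python
# Region-wise occlusion sensitivity sketch for a brain age regressor.
# `model` is assumed to map a (1, 1, D, H, W) tensor to a scalar age.
import torch

@torch.no_grad()
def region_occlusion_sensitivity(model, volume, atlas, region_labels):
    """volume: (1, 1, D, H, W) tensor; atlas: (D, H, W) integer label array."""
    baseline = model(volume).item()                  # prediction on the intact volume
    scores = {}
    for label in region_labels:
        occluded = volume.clone()
        mask = torch.from_numpy(atlas == label).to(volume.device)
        occluded[0, 0][mask] = 0.0                   # zero out one atlas region
        scores[label] = abs(model(occluded).item() - baseline)
    return scores                                    # larger change = more influential region
```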

Knowledge-Aware Multisite Adaptive Graph Transformer for Brain Disorder Diagnosis.

Song X, Shu K, Yang P, Zhao C, Zhou F, Frangi AF, Xiao X, Dong L, Wang T, Wang S, Lei B

PubMed | Jun 1, 2025
Brain disorder diagnosis via resting-state functional magnetic resonance imaging (rs-fMRI) is usually limited by complex imaging features and small sample sizes. For brain disorder diagnosis, the graph convolutional network (GCN) has achieved remarkable success by capturing interactions between individuals and the population. However, there are three main limitations: 1) previous GCN approaches consider non-imaging information in edge construction but ignore the differing sensitivity of features to that non-imaging information; 2) previous GCN approaches focus solely on establishing interactions between subjects (i.e., individuals and the population), disregarding the essential relationships between features; and 3) multisite data increase the sample size available for classifier training, but inter-site heterogeneity limits performance to some extent. This paper proposes a knowledge-aware multisite adaptive graph Transformer to address these problems. First, we evaluate the sensitivity of features to each piece of non-imaging information and then construct feature-sensitive and feature-insensitive subgraphs. Second, after fusing these subgraphs, we integrate a Transformer module to capture the intrinsic relationships between features. Third, we design a domain-adaptive GCN with multiple loss terms to relieve data heterogeneity and produce the final classification results. Finally, the proposed framework is validated on two brain disorder diagnostic tasks. Experimental results show that the framework achieves state-of-the-art performance.
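
As a hedged illustration of the population-graph backbone such methods build on (not the proposed knowledge-aware Transformer itself), the snippet below implements a single GCN layer in plain PyTorch, where nodes are subjects, edge weights encode non-imaging similarity, and node features are imaging-derived vectors; the graph construction here is random and purely illustrative.

```python
# A single population-graph GCN layer with symmetric adjacency normalization.
import torch
import torch.nn as nn

class PopulationGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_subjects, in_dim); adj: (num_subjects, num_subjects) edge weights
        a_hat = adj + torch.eye(adj.size(0))                  # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.clamp(min=1e-8).pow(-0.5))
        norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt            # D^-1/2 (A+I) D^-1/2
        return torch.relu(self.linear(norm_adj @ x))

x = torch.randn(100, 64)                                      # 100 subjects, 64 features each
adj = torch.rand(100, 100); adj = (adj + adj.T) / 2           # symmetric similarity graph (dummy)
layer = PopulationGCNLayer(64, 32)
print(layer(x, adj).shape)                                    # torch.Size([100, 32])
```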

Ultra-Sparse-View Cone-Beam CT Reconstruction-Based Strictly Structure-Preserved Deep Neural Network in Image-Guided Radiation Therapy.

Song Y, Zhang W, Wu T, Luo Y, Shi J, Yang X, Deng Z, Qi X, Li G, Bai S, Zhao J, Zhong R

PubMed | Jun 1, 2025
Radiation therapy is regarded as the mainstay treatment for cancer in the clinic. Kilovoltage cone-beam CT (CBCT) images are acquired for most treatment sites as part of the clinical routine for image-guided radiation therapy (IGRT). However, repeated CBCT scanning delivers extra radiation dose to patients and decreases clinical efficiency. Sparse-view CBCT scanning is a possible solution to these problems, but at the cost of inferior image quality. To decrease the extra dose while maintaining CBCT quality, deep learning (DL) methods are widely adopted. In this study, the planning CT was used as prior information, and the corresponding strictly structure-preserved CBCT was simulated based on the attenuation information from the planning CT. We developed a hyper-resolution ultra-sparse-view CBCT reconstruction model, the planning CT-based strictly structure-preserved neural network (PSSP-NET), using a generative adversarial network (GAN). The model uses clinical CBCT projections acquired at extremely low sampling rates for rapid reconstruction of high-quality CBCT images, and its clinical performance was evaluated in head-and-neck cancer patients. Our experiments demonstrated enhanced performance and improved reconstruction speed.
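
Because PSSP-NET itself is not described in code here, the following is only a generic, pix2pix-style sketch of how a conditional GAN could map a sparse-view reconstruction plus a registered planning CT prior to a higher-quality CBCT slice; the toy networks, loss weights, and dummy tensors are all assumptions.

```python
# Generic image-to-image GAN training step (adversarial + L1 loss), with a
# sparse-view CBCT slice and a planning CT prior as the two input channels.
import torch
import torch.nn as nn

generator = nn.Sequential(            # toy generator: 2 input channels -> 1 output channel
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1))
discriminator = nn.Sequential(        # toy patch discriminator on (input, CBCT) pairs
    nn.Conv2d(2, 32, 3, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, padding=1))
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

sparse_recon = torch.randn(4, 1, 128, 128)   # sparse-view CBCT slice (dummy)
planning_ct = torch.randn(4, 1, 128, 128)    # registered planning CT prior (dummy)
target_cbct = torch.randn(4, 1, 128, 128)    # fully sampled reference (dummy)
inputs = torch.cat([sparse_recon, planning_ct], dim=1)

# Discriminator step: real pairs vs. generated pairs
fake = generator(inputs).detach()
d_real = discriminator(torch.cat([sparse_recon, target_cbct], dim=1))
d_fake = discriminator(torch.cat([sparse_recon, fake], dim=1))
d_loss = adv_loss(d_real, torch.ones_like(d_real)) + adv_loss(d_fake, torch.zeros_like(d_fake))
d_opt.zero_grad(); d_loss.backward(); d_opt.step()

# Generator step: fool the discriminator while staying close to the target
fake = generator(inputs)
g_adv = adv_loss(discriminator(torch.cat([sparse_recon, fake], dim=1)), torch.ones_like(d_real))
g_loss = g_adv + 100.0 * l1_loss(fake, target_cbct)
g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```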

A Foundation Model for Lesion Segmentation on Brain MRI With Mixture of Modality Experts.

Zhang X, Ou N, Doga Basaran B, Visentin M, Qiao M, Gu R, Matthews PM, Liu Y, Ye C, Bai W

PubMed | Jun 1, 2025
Brain lesion segmentation is crucial for neurological disease research and diagnosis. As different types of lesions exhibit distinct characteristics on different imaging modalities, segmentation methods are typically developed in a task-specific manner, where each segmentation model is tailored to a specific lesion type and modality. However, the use of task-specific models requires predetermination of the lesion type and imaging modality, which complicates their deployment in real-world scenarios. In this work, we propose a universal foundation model for brain lesion segmentation on magnetic resonance imaging (MRI), which can automatically segment different types of brain lesions given inputs from various MRI modalities. We develop a novel Mixture of Modality Experts (MoME) framework with multiple expert networks attending to different imaging modalities. A hierarchical gating network is proposed to combine the expert predictions and foster expertise collaboration. Moreover, to avoid the degeneration of each expert network, we introduce a curriculum learning strategy during training to preserve the specialisation of each expert. In addition to MoME, to handle combinations of multiple input modalities, we propose MoME+, which uses a soft dispatch network for input modality routing. We evaluated the proposed method on nine brain lesion datasets, encompassing five imaging modalities and eight lesion types. The results show that our model outperforms state-of-the-art universal models for brain lesion segmentation and achieves promising generalisation performance on unseen datasets.
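
A minimal sketch of the core mixture-of-experts mechanism (per-modality experts combined by a gating network) is given below; the hierarchical gating, curriculum learning, and MoME+ dispatch described in the abstract are not reproduced, and the toy expert architectures and shapes are assumptions.

```python
# Toy mixture-of-experts: per-modality expert networks whose outputs are
# blended by softmax gating weights predicted from the input volume.
import torch
import torch.nn as nn

class SimpleMoME(nn.Module):
    def __init__(self, num_experts: int = 3):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Conv3d(1, 1, 3, padding=1) for _ in range(num_experts)])  # one toy expert per modality
        self.gate = nn.Sequential(                                        # gating network on the input
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(1, num_experts))

    def forward(self, x):
        # x: (batch, 1, D, H, W); expert_outs: (batch, num_experts, D, H, W)
        expert_outs = torch.stack([e(x).squeeze(1) for e in self.experts], dim=1)
        weights = torch.softmax(self.gate(x), dim=1)        # (batch, num_experts)
        weights = weights[:, :, None, None, None]
        return (weights * expert_outs).sum(dim=1)           # weighted lesion map

model = SimpleMoME()
seg = model(torch.randn(2, 1, 32, 32, 32))
print(seg.shape)  # torch.Size([2, 32, 32, 32])
```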

RS-MAE: Region-State Masked Autoencoder for Neuropsychiatric Disorder Classifications Based on Resting-State fMRI.

Ma H, Xu Y, Tian L

PubMed | Jun 1, 2025
Dynamic functional connectivity (DFC) extracted from resting-state functional magnetic resonance imaging (fMRI) has been widely used for neuropsychiatric disorder classification. However, serious information redundancy within DFC matrices can significantly undermine the performance of classification models based on them. Moreover, traditional deep models cannot adapt well to connectivity-like data, and insufficient training samples further hinder their effective training. In this study, we propose a novel region-state masked autoencoder (RS-MAE) for representation learning from DFC matrices and, ultimately, neuropsychiatric disorder classification based on fMRI. Three strategies were adopted to address the aforementioned limitations. First, a masked autoencoder (MAE) was introduced to reduce redundancy within DFC matrices and simultaneously learn effective representations of human brain function. Second, region-state (RS) patch embedding was proposed to replace the space-time patch embedding of video MAE to adapt to DFC matrices, in which only topological locality, rather than spatial locality, exists. Third, random state concatenation (RSC) was introduced as a DFC matrix augmentation approach to alleviate the problem of training sample insufficiency. Neuropsychiatric disorder classification was performed by fine-tuning the pretrained encoder included in RS-MAE. The performance of the proposed RS-MAE was evaluated on four publicly available datasets, achieving accuracies of 76.32%, 77.25%, 88.87%, and 76.53% for the attention deficit and hyperactivity disorder (ADHD), autism spectrum disorder (ASD), Alzheimer's disease (AD), and schizophrenia (SCZ) classification tasks, respectively. These results demonstrate the efficacy of RS-MAE as an effective deep learning model for neuropsychiatric disorder classification.
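
Two ingredients of this pipeline can be sketched compactly: building a sliding-window DFC tensor from a regional fMRI time series, and masking region-state patches before reconstruction. The window length, mask ratio, and array shapes below are assumptions, not the paper's settings.

```python
# Sliding-window DFC computation plus random region-state masking.
import numpy as np

def sliding_window_dfc(ts, win=30, step=5):
    """ts: (timepoints, regions) -> DFC tensor (windows, regions, regions)."""
    mats = []
    for start in range(0, ts.shape[0] - win + 1, step):
        mats.append(np.corrcoef(ts[start:start + win].T))   # per-window correlation matrix
    return np.stack(mats)

rng = np.random.default_rng(0)
ts = rng.normal(size=(200, 90))           # 200 timepoints, 90 atlas regions (dummy)
dfc = sliding_window_dfc(ts)              # (n_windows, 90, 90)

# Region-state style masking: treat each region's connectivity profile in each
# window (state) as one patch and hide a random 75% of the patches.
n_windows, n_regions, _ = dfc.shape
mask = rng.random((n_windows, n_regions)) < 0.75
masked_dfc = dfc.copy()
masked_dfc[mask] = 0.0                    # zeroed patches would be reconstructed by the MAE
print(dfc.shape, mask.mean().round(2))
```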

External validation and performance analysis of a deep learning-based model for the detection of intracranial hemorrhage.

Nada A, Sayed AA, Hamouda M, Tantawi M, Khan A, Alt A, Hassanein H, Sevim BC, Altes T, Gaballah A

PubMed | Jun 1, 2025
Purpose: We aimed to investigate the external validation and performance of an FDA-approved deep learning model for labeling intracranial hemorrhage (ICH) cases on a real-world heterogeneous clinical dataset. Furthermore, we evaluated how patients' risk factors influenced the model's performance and gathered feedback on satisfaction from radiologists of varying ranks. Methods: This prospective, IRB-approved study included 5600 non-contrast CT scans of the head acquired in various clinical settings, that is, emergency, inpatient, and outpatient units. Patients' risk factors were collected and tested for their impact on the performance of the DL model using univariate and multivariate regression analyses. The performance of the DL model was compared with the radiologists' interpretation to determine the presence or absence of ICH, with subsequent classification into ICH subcategories. Key metrics, including accuracy, sensitivity, specificity, positive predictive value, and negative predictive value, were calculated, and the receiver operating characteristic (ROC) curve and area under the curve (AUC) were determined. Additionally, a questionnaire was administered to radiologists of varying ranks to assess their experience with the model. Results: The model exhibited outstanding performance, achieving a sensitivity of 89% and a specificity of 96%. Additional performance metrics, including positive predictive value (82%), negative predictive value (97%), and overall accuracy (94%), underscore its robust capabilities. The area under the ROC curve further demonstrated the model's efficacy, reaching 0.954. Multivariate logistic regression revealed statistical significance for age, sex, history of trauma, operative intervention, hypertension, and smoking. Conclusion: Our study highlights the satisfactory performance of the DL model on a diverse real-world dataset, garnering positive feedback from radiology trainees.
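
For reference, the reported metrics follow directly from a confusion matrix and the model's output scores; the short sketch below uses dummy labels and scores, not the study data.

```python
# Sensitivity, specificity, PPV, NPV, accuracy, and AUC from dummy predictions.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])                     # 1 = ICH present (dummy labels)
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.6, 0.8, 0.3])    # model probabilities (dummy)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_score)
print(sensitivity, specificity, ppv, npv, accuracy, round(auc, 3))
```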

Predicting hemorrhagic transformation in acute ischemic stroke: a systematic review, meta-analysis, and methodological quality assessment of CT/MRI-based deep learning and radiomics models.

Salimi M, Vadipour P, Bahadori AR, Houshi S, Mirshamsi A, Fatemian H

PubMed | Jun 1, 2025
Acute ischemic stroke (AIS) is a major cause of mortality and morbidity, with hemorrhagic transformation (HT) as a severe complication. Accurate prediction of HT is essential for optimizing treatment strategies. This review assesses the accuracy and utility of deep learning (DL) and radiomics models for predicting HT from imaging, with respect to clinical decision-making for AIS patients. A literature search was conducted across five databases (PubMed, Scopus, Web of Science, Embase, IEEE) up to January 23, 2025. Studies involving DL or radiomics-based ML models for predicting HT in AIS patients were included. Data from training, validation, and clinical-combined models were extracted and analyzed separately. Pooled sensitivity, specificity, and AUC were calculated with a random-effects bivariate model. For quality assessment, the Methodological Radiomics Score (METRICS) and the QUADAS-2 tool were used. Sixteen studies comprising 3,083 participants were included in the meta-analysis. The pooled AUC for training cohorts was 0.87, with a sensitivity of 0.80 and a specificity of 0.85. For validation cohorts, the AUC was 0.87, sensitivity 0.81, and specificity 0.86. Clinical-combined models showed an AUC of 0.93, sensitivity of 0.84, and specificity of 0.89. Moderate to severe heterogeneity was noted and addressed. Deep learning models outperformed radiomics models, while clinical-combined models outperformed both deep learning-only and radiomics-only models. The average METRICS score was 62.85%. No publication bias was detected. DL and radiomics models show great potential for predicting HT in AIS patients. However, addressing methodological issues, such as inconsistent reference standards and limited external validation, is essential for the clinical implementation of these models.
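
The pooling step can be illustrated with a simplified sketch: DerSimonian-Laird random-effects pooling of logit-transformed sensitivities. The review itself uses a bivariate model that pools sensitivity and specificity jointly, so this univariate version with made-up per-study counts only conveys the general mechanics.

```python
# Simplified DerSimonian-Laird random-effects pooling of study sensitivities.
import numpy as np

tp = np.array([40, 55, 30, 62])          # hypothetical true positives per study
fn = np.array([10, 12, 9, 15])           # hypothetical false negatives per study

y = np.log(tp / fn)                      # logit of sensitivity = log(TP/FN)
v = 1.0 / tp + 1.0 / fn                  # approximate variance of the logit
w = 1.0 / v                              # fixed-effect weights
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q heterogeneity statistic
tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_star = 1.0 / (v + tau2)                # random-effects weights
pooled_logit = np.sum(w_star * y) / np.sum(w_star)
pooled_sens = 1.0 / (1.0 + np.exp(-pooled_logit))
print(round(pooled_sens, 3))
```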