Page 119 of 124 (1236 results)

ABS-Mamba: SAM2-Driven Bidirectional Spiral Mamba Network for Medical Image Translation

Feng Yuan, Yifan Gao, Wenbin Wu, Keqing Wu, Xiaotong Guo, Jie Jiang, Xin Gao

arxiv logopreprintMay 12 2025
Accurate multi-modal medical image translation requires harmonizing global anatomical semantics and local structural fidelity, a challenge complicated by intermodality information loss and structural distortion. We propose ABS-Mamba, a novel architecture integrating the Segment Anything Model 2 (SAM2) for organ-aware semantic representation, specialized convolutional neural networks (CNNs) for preserving modality-specific edge and texture details, and Mamba's selective state-space modeling for efficient long- and short-range feature dependencies. Structurally, our dual-resolution framework leverages SAM2's image encoder to capture organ-scale semantics from high-resolution inputs, while a parallel CNN branch extracts fine-grained local features. The Robust Feature Fusion Network (RFFN) integrates these representations, and the Bidirectional Mamba Residual Network (BMRN) models spatial dependencies using spiral scanning and bidirectional state-space dynamics. A three-stage skip fusion decoder enhances edge and texture fidelity. We employ Efficient Low-Rank Adaptation (LoRA+) fine-tuning to enable precise domain specialization while maintaining the foundational capabilities of the pre-trained components. Extensive experimental validation on the SynthRAD2023 and BraTS2019 datasets demonstrates that ABS-Mamba outperforms state-of-the-art methods, delivering high-fidelity cross-modal synthesis that preserves anatomical semantics and structural details to enhance diagnostic accuracy in clinical applications. The code is available at https://github.com/gatina-yone/ABS-Mamba
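The abstract does not spell out BMRN's spiral scanning scheme, but the underlying idea of serializing a 2D feature map along a spiral path before feeding it to a sequence model can be sketched as follows (a minimal illustration; the clockwise outside-in ordering and function name are assumptions, not the authors' implementation):

```python
def spiral_order(h, w):
    """Visit every cell of an h x w grid in a clockwise spiral, outside in."""
    seen = [[False] * w for _ in range(h)]
    order = []
    r = c = 0
    dr, dc = 0, 1  # start moving right along the top row
    for _ in range(h * w):
        order.append((r, c))
        seen[r][c] = True
        nr, nc = r + dr, c + dc
        if not (0 <= nr < h and 0 <= nc < w and not seen[nr][nc]):
            dr, dc = dc, -dr  # turn clockwise at a border or visited cell
            nr, nc = r + dr, c + dc
        r, c = nr, nc
    return order
```

A feature map `feat` (nested lists) would then be flattened into a sequence with `[feat[r][c] for r, c in spiral_order(h, w)]` before the bidirectional state-space pass.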

Application of improved graph convolutional network for cortical surface parcellation.

Tan J, Ren X, Chen Y, Yuan X, Chang F, Yang R, Ma C, Chen X, Tian M, Chen W, Wang Z

pubmed logopapersMay 12 2025
Accurate cortical surface parcellation is essential for elucidating brain organizational principles, functional mechanisms, and the neural substrates underlying higher cognitive and emotional processes. However, the cortical surface has a highly folded, complex geometry, and large regional variations make the analysis of surface data challenging. Current methods rely on geometric simplification, such as spherical expansion, in which spherical mapping and registration take hours; this popular but costly process does not take full advantage of the inherent structural information. In this study, we propose an Attention-guided Deep Graph Convolutional network (ADGCN) for end-to-end parcellation on primitive cortical surface manifolds. ADGCN consists of a deep graph convolutional layer with a symmetrical U-shaped structure, which enables it to effectively transmit detailed information from the original brain map, learn the complex graph structure, and enhance the network's feature extraction capability. Moreover, we introduce the Squeeze and Excitation (SE) module, which enables the network to better capture key features, suppress unimportant features, and significantly improve parcellation performance with a small amount of computation. We evaluated the model on a public dataset of 100 manually labeled brain surfaces. Compared with other methods, the proposed network achieves a Dice coefficient of 88.53% and an accuracy of 90.27%. The network can segment the cortex directly in the original domain, and has the advantages of high efficiency, simple operation, and strong interpretability. This approach facilitates the investigation of cortical changes during development, aging, and disease progression, with the potential to enhance the accuracy of neurological disease diagnosis and the objectivity of treatment efficacy evaluation.
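For readers unfamiliar with Squeeze-and-Excitation, the channel gating it performs can be sketched in a few framework-free lines (a toy illustration with hypothetical weight matrices; ADGCN's actual SE module operates on graph-convolution feature channels):

```python
import math

def matvec(W, x):
    # plain matrix-vector product on nested lists
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

def se_gate(z, W1, W2):
    """Squeeze-and-Excitation gating: z is the per-channel 'squeezed'
    descriptor (e.g., the global average of each channel's features);
    returns one multiplicative gate in (0, 1) per channel."""
    hidden = [max(0.0, v) for v in matvec(W1, z)]                     # ReLU bottleneck
    return [1.0 / (1.0 + math.exp(-v)) for v in matvec(W2, hidden)]  # sigmoid gates
```

Each channel's features are then rescaled by its gate, e.g. `[g * f for g, f in zip(gates, channel_features)]`, which is how the module emphasizes informative channels at little extra cost.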

MRI-Based Diagnostic Model for Alzheimer's Disease Using 3D-ResNet.

Chen D, Yang H, Li H, He X, Mu H

pubmed logopapersMay 12 2025
Alzheimer's disease (AD), a progressive neurodegenerative disorder, is the leading cause of dementia worldwide and remains incurable once it begins. Therefore, early and accurate diagnosis is essential for effective intervention. Leveraging recent advances in deep learning, this study proposes a novel diagnostic model based on the 3D-ResNet architecture to classify three cognitive states: AD, mild cognitive impairment (MCI), and cognitively normal (CN) individuals, using MRI data. The model integrates the strengths of ResNet and 3D convolutional neural networks (3D-CNN), and incorporates a special attention mechanism (SAM) within the residual structure to enhance feature representation. The study utilized the ADNI dataset, comprising 800 brain MRI scans. The dataset was split in a 7:3 ratio for training and testing, and the network was trained using data augmentation and cross-validation strategies. The proposed model achieved 92.33% accuracy in the three-class classification task, and 97.61%, 95.83%, and 93.42% accuracy in binary classifications of AD vs. CN, AD vs. MCI, and CN vs. MCI, respectively, outperforming existing state-of-the-art methods. Furthermore, Grad-CAM heatmaps and 3D MRI reconstructions revealed that the cerebral cortex and hippocampus are critical regions for AD classification. These findings demonstrate a robust and interpretable AI-based diagnostic framework for AD, providing valuable technical support for its timely detection and clinical intervention.
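A 7:3 train/test split like the one described above is usually done per class so that AD, MCI, and CN keep the same proportions in both sets; a minimal stdlib sketch (the function name, seeding, and rounding rule are illustrative assumptions, not details from the paper):

```python
import random
from collections import defaultdict

def stratified_split(labels, train_frac=0.7, seed=0):
    """Split sample indices so each class contributes ~train_frac to training."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    train, test = [], []
    for idx in by_class.values():
        rng.shuffle(idx)          # randomize within each class
        cut = round(len(idx) * train_frac)
        train += idx[:cut]
        test += idx[cut:]
    return sorted(train), sorted(test)
```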

A Clinical Neuroimaging Platform for Rapid, Automated Lesion Detection and Personalized Post-Stroke Outcome Prediction

Brzus, M., Griffis, J. C., Riley, C. J., Bruss, J., Shea, C., Johnson, H. J., Boes, A. D.

medrxiv logopreprintMay 11 2025
Predicting long-term functional outcomes for individuals with stroke is a significant challenge. Solving this challenge will open new opportunities for improving stroke management by informing acute interventions and guiding personalized rehabilitation strategies. The location of the stroke is a key predictor of outcomes, yet no clinically deployed tools incorporate lesion location information for outcome prognostication. This study responds to this critical need by introducing a fully automated, three-stage neuroimaging processing and machine learning pipeline that predicts personalized outcomes from clinical imaging in adult ischemic stroke patients. In the first stage, our system automatically processes raw DICOM inputs, registers the brain to a standard template, and uses deep learning models to segment the stroke lesion. In the second stage, lesion location and automatically derived network features are input into statistical models trained to predict long-term impairments from a large independent cohort of lesion patients. In the third stage, a structured PDF report is generated using a large language model that describes the stroke's location, the arterial distribution, and personalized prognostic information. We demonstrate the viability of this approach in a proof-of-concept application predicting select cognitive outcomes in a stroke cohort. Brain-behavior models were pre-trained to predict chronic impairment on 28 different cognitive outcomes in a large cohort of patients with focal brain lesions (N=604). The automated pipeline used these models to predict outcomes from clinically acquired MRIs in an independent ischemic stroke cohort (N=153). Starting from raw clinical DICOM images, we show that our pipeline can generate outcome predictions for individual patients in less than 3 minutes with 96% concordance relative to methods requiring manual processing.
We also show that prediction accuracy is enhanced using models that incorporate lesion location, lesion-associated network information, and demographics. Our results provide a strong proof-of-concept and lay the groundwork for developing imaging-based clinical tools for stroke outcome prognostication.

Altered intrinsic ignition dynamics linked to Amyloid-β and tau pathology in Alzheimer's disease

Patow, G. A., Escrichs, A., Martinez-Molina, N., Ritter, P., Deco, G.

biorxiv logopreprintMay 11 2025
Alzheimer's disease (AD) progressively alters brain structure and function, yet the associated changes in large-scale brain network dynamics remain poorly understood. We applied the intrinsic ignition framework to resting-state functional MRI (rs-fMRI) data from AD patients, individuals with mild cognitive impairment (MCI), and cognitively healthy controls (HC) to elucidate how AD shapes intrinsic brain activity. We assessed node-metastability at the whole-brain level and in 7 canonical resting-state networks (RSNs). Our results revealed a progressive decline in dynamical complexity across the disease continuum. HC exhibited the highest node-metastability, whereas it was substantially reduced in MCI and AD patients. The cortical hierarchy of information processing was also disrupted, indicating that rich-club hubs may be selectively affected in AD progression. Furthermore, we used linear mixed-effects models to evaluate the influence of Amyloid-β (Aβ) and tau pathology on brain dynamics at both regional and whole-brain levels. We found significant associations between both protein burdens and alterations in node metastability. Lastly, a machine learning classifier trained on brain dynamics, Aβ, and tau burden features achieved high accuracy in discriminating between disease stages. Together, our findings highlight the progressive disruption of intrinsic ignition across whole-brain and RSNs in AD and support the use of node-metastability in conjunction with proteinopathy as a novel framework for tracking disease progression.
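Metastability in this literature is commonly quantified as the temporal variability of phase synchrony. Given instantaneous regional phases (e.g., from a Hilbert transform of band-filtered BOLD signals), a minimal sketch of the global Kuramoto-based definition is below; the authors' node-level variant and preprocessing are not specified in the abstract, so this is only an illustration of the general idea:

```python
import cmath
import statistics

def kuramoto_R(phases):
    """Order parameter R in [0, 1] for one time point (phases in radians):
    1 = all regions perfectly phase-locked, ~0 = fully desynchronized."""
    return abs(sum(cmath.exp(1j * p) for p in phases) / len(phases))

def metastability(phase_series):
    """Std. dev. of R over time: 0 for constant synchrony, larger when the
    network keeps switching between synchronized and desynchronized states."""
    return statistics.pstdev(kuramoto_R(row) for row in phase_series)
```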

Machine learning approaches for classifying major depressive disorder using biological and neuropsychological markers: A meta-analysis.

Zhang L, Jian L, Long Y, Ren Z, Calhoun VD, Passos IC, Tian X, Xiang Y

pubmed logopapersMay 10 2025
Traditional diagnostic methods for major depressive disorder (MDD), which rely on subjective assessments, may compromise diagnostic accuracy. In contrast, machine learning models have the potential to classify and diagnose MDD more effectively, reducing the risk of misdiagnosis associated with conventional methods. The aim of this meta-analysis is to evaluate the overall classification accuracy of machine learning models in MDD and examine the effects of machine learning algorithms, biomarkers, diagnostic comparison groups, validation procedures, and participant age on classification performance. As of September 2024, a total of 176 studies were ultimately included in the meta-analysis, encompassing a total of 60,926 participants. A random-effects model was applied to analyze the extracted data, resulting in an overall classification accuracy of 0.825 (95% CI [0.810; 0.839]). Convolutional neural networks significantly outperformed support vector machines (SVM) when using electroencephalography and magnetoencephalography data. Additionally, SVM demonstrated significantly better performance with functional magnetic resonance imaging data compared to graph neural networks and Gaussian process classification. The sample size was negatively correlated with classification accuracy. Furthermore, evidence of publication bias was also detected. Therefore, while this study indicates that machine learning models show high accuracy in distinguishing MDD from healthy controls and other psychiatric disorders, further research is required before these findings can be generalized to large-scale clinical practice.
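The random-effects pooling behind an estimate like 0.825 (95% CI [0.810; 0.839]) can be illustrated with the classic DerSimonian-Laird estimator (a generic sketch of the method, not the authors' exact analysis, which likely worked on transformed proportions):

```python
def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate and 95% CI (DerSimonian-Laird tau^2)."""
    w = [1.0 / v for v in variances]                 # inverse-variance weights
    sw = sum(w)
    theta_fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sw
    Q = sum(wi * (yi - theta_fixed) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    C = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (Q - df) / C) if C > 0 else 0.0  # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]   # re-weight with tau^2
    theta = sum(wi * yi for wi, yi in zip(w_star, estimates)) / sum(w_star)
    se = (1.0 / sum(w_star)) ** 0.5
    return theta, (theta - 1.96 * se, theta + 1.96 * se)
```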

Artificial Intelligence in Vascular Neurology: Applications, Challenges, and a Review of AI Tools for Stroke Imaging, Clinical Decision Making, and Outcome Prediction Models.

Alqadi MM, Vidal SGM

pubmed logopapersMay 9 2025
Artificial intelligence (AI) promises to compress stroke treatment timelines, yet its clinical return on investment remains uncertain. We interrogate state‑of‑the‑art AI platforms across imaging, workflow orchestration, and outcome prediction to clarify value drivers and execution risks. Convolutional, recurrent, and transformer architectures now trigger large‑vessel‑occlusion alerts, delineate ischemic core in seconds, and forecast 90‑day function. Commercial deployments-RapidAI, Viz.ai, Aidoc-report double‑digit reductions in door‑to‑needle metrics and expanded thrombectomy eligibility. However, dataset bias, opaque reasoning, and limited external validation constrain scalability. Hybrid image‑plus‑clinical models elevate predictive accuracy but intensify data‑governance demands. AI can operationalize precision stroke care, but enterprise‑grade adoption requires federated data pipelines, explainable‑AI dashboards, and fit‑for‑purpose regulation. Prospective multicenter trials and continuous lifecycle surveillance are mandatory to convert algorithmic promise into reproducible, equitable patient benefit.

Deep learning for Parkinson's disease classification using multimodal and multi-sequences PET/MR images.

Chang Y, Liu J, Sun S, Chen T, Wang R

pubmed logopapersMay 9 2025
We aimed to use deep learning (DL) techniques to accurately differentiate Parkinson's disease (PD) from multiple system atrophy (MSA), which share similar clinical presentations. In this retrospective analysis, 206 patients who underwent PET/MR imaging at the Chinese PLA General Hospital were included, having been clinically diagnosed with either PD or MSA; an additional 38 healthy volunteers served as normal controls (NC). All subjects were randomly assigned to the training and test sets at a ratio of 7:3. The input to the model consists of 10 two-dimensional (2D) slices in axial, coronal, and sagittal planes from multi-modal images. A modified Residual Block Network with 18 layers (ResNet18) was trained with different modal images to classify PD, MSA, and NC. A four-fold cross-validation method was applied in the training set. Performance evaluations included accuracy, precision, recall, F1 score, receiver operating characteristic (ROC), and area under the ROC curve (AUC). Six single-modal models and seven multi-modal models were trained and tested. The PET models outperformed MRI models. The ¹¹C-methyl-N-2β-carbomethoxy-3β-(4-fluorophenyl)-tropane (¹¹C-CFT)-apparent diffusion coefficient (ADC) model showed the best classification, which resulted in 0.97 accuracy, 0.93 precision, 0.95 recall, 0.92 F1, and 0.96 AUC. In the test set, the accuracy, precision, recall, and F1 score of the CFT-ADC model were 0.70, 0.73, 0.93, and 0.82, respectively. The proposed DL method shows potential as a high-performance assisting tool for the accurate diagnosis of PD and MSA. A multi-modal and multi-sequence model could further enhance the ability to classify PD.
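The precision, recall, and F1 metrics reported above follow the standard definitions; a stdlib sketch for one positive class (illustrative only; the study's metrics are per-class over PD/MSA/NC):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Standard binary metrics for one designated positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0   # of predicted positives, how many correct
    recall = tp / (tp + fn) if tp + fn else 0.0      # of true positives, how many found
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```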

Circulating Antioxidant Nutrients and Brain Age in Midlife Adults.

Lower MJ, DeCataldo MK, Kraynak TE, Gianaros PJ

pubmed logopapersMay 9 2025
Due to population aging, the increasing prevalence of Alzheimer's disease (AD) and related dementias is a major public health concern. Dietary consumption of antioxidant nutrients, in particular the carotenoid β-carotene, has been associated with lower age-related neurocognitive decline. What is unclear, however, is the extent to which antioxidant nutrients may exert neuroprotective effects via their influence on established indicators of age-related changes in brain tissue. This study thus tested associations of circulating β-carotene and other nutrients with a structural neuroimaging indicator of brain age derived from cross-validated machine learning models trained to predict chronological age from brain tissue morphology in independent cohorts. Midlife adults (N=132, aged 30.4 to 50.8 years, 59 female at birth) underwent a structural magnetic resonance imaging (MRI) protocol and fasting phlebotomy to assess plasma concentrations of β-carotene, retinol, γ-tocopherol, α-tocopherol, and β-cryptoxanthin. In regression analyses adjusting for chronological age, sex at birth, smoking status, MRI image quality, season of testing, annual income, and education, greater circulating levels of β-carotene were associated with a lower (i.e., younger) predicted brain age (β=-0.23, 95% CI=-0.40 to -0.07, P=0.006). Other nutrients were not statistically associated with brain age, and results persisted after additional covariate control for body mass index, cortical volume, and cortical thickness. These cross-sectional findings are consistent with the possibility that dietary intake of β-carotene may be associated with slower biological aging at the level of the brain, as reflected by a neuroimaging indicator of brain age.

Comparison between multimodal foundation models and radiologists for the diagnosis of challenging neuroradiology cases with text and images.

Le Guellec B, Bruge C, Chalhoub N, Chaton V, De Sousa E, Gaillandre Y, Hanafi R, Masy M, Vannod-Michel Q, Hamroun A, Kuchcinski G

pubmed logopapersMay 9 2025
The purpose of this study was to compare the ability of two multimodal models (GPT-4o and Gemini 1.5 Pro) with that of radiologists to generate differential diagnoses from textual context alone, key images alone, or a combination of both using complex neuroradiology cases. This retrospective study included neuroradiology cases from the "Diagnosis Please" series published in the Radiology journal between January 2008 and September 2024. The two multimodal models were asked to provide three differential diagnoses from textual context alone, key images alone, or the complete case. Six board-certified neuroradiologists solved the cases in the same setting, randomly assigned to two groups: context alone first and images alone first. Three radiologists solved the cases without, and then with the assistance of Gemini 1.5 Pro. An independent radiologist evaluated the quality of the image descriptions provided by GPT-4o and Gemini for each case. Differences in correct answers between multimodal models and radiologists were analyzed using the McNemar test. GPT-4o and Gemini 1.5 Pro outperformed radiologists using clinical context alone (mean accuracy, 34.0 % [18/53] and 44.7 % [23.7/53] vs. 16.4 % [8.7/53]; both P < 0.01). Radiologists outperformed GPT-4o and Gemini 1.5 Pro using images alone (mean accuracy, 42.1 % [22.3/53] vs. 3.8 % [2/53], and 7.5 % [4/53]; both P < 0.01) and the complete cases (48.0 % [25.6/53] vs. 34.0 % [18/53], and 38.7 % [20.3/53]; both P < 0.001). While radiologists improved their accuracy when combining multimodal information (from 42.1 % [22.3/53] for images alone to 50.3 % [26.7/53] for complete cases; P < 0.01), GPT-4o and Gemini 1.5 Pro did not benefit from the multimodal context (from 34.0 % [18/53] for text alone to 35.2 % [18.7/53] for complete cases for GPT-4o; P = 0.48, and from 44.7 % [23.7/53] to 42.8 % [22.7/53] for Gemini 1.5 Pro; P = 0.54). Radiologists benefited significantly from the suggestions of Gemini 1.5 Pro, increasing their accuracy from 47.2 % [25/53] to 56.0 % [27/53] (P < 0.01). Both GPT-4o and Gemini 1.5 Pro correctly identified the imaging modality in 53/53 (100 %) and 51/53 (96.2 %) cases, respectively, but frequently failed to identify key imaging findings (43/53 cases [81.1 %] with incorrect identification of key imaging findings for GPT-4o and 50/53 [94.3 %] for Gemini 1.5 Pro). Radiologists show a specific ability to benefit from the integration of textual and visual information, whereas multimodal models mostly rely on the clinical context to suggest diagnoses.
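The McNemar test used above for paired model-vs-radiologist comparisons depends only on the discordant pairs (cases one reader got right and the other got wrong); an exact two-sided version can be sketched with stdlib combinatorics (the study may have used the asymptotic chi-square form instead, and the counts below are illustrative):

```python
import math

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p-value from the discordant pairs:
    b = reader A correct & B wrong, c = A wrong & B correct.
    Under H0 the smaller count follows Binomial(b + c, 0.5)."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: nothing to test
    tail = sum(math.comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2.0 * tail)  # double the smaller tail, capped at 1
```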
