
CINeMA: Conditional Implicit Neural Multi-Modal Atlas for a Spatio-Temporal Representation of the Perinatal Brain.

Dannecker M, Sideri-Lampretsa V, Starck S, Mihailov A, Milh M, Girard N, Auzias G, Rueckert D

PubMed | Sep 3, 2025
Magnetic resonance imaging of fetal and neonatal brains reveals rapid neurodevelopment marked by substantial anatomical changes unfolding within days. Studying this critical stage of the developing human brain therefore requires accurate brain models, referred to as atlases, of high spatial and temporal resolution. To meet these demands, established traditional atlases and recently proposed deep learning-based methods rely on large and comprehensive datasets. This poses a major challenge for studying brains in the presence of pathologies, for which data remains scarce. We address this limitation with CINeMA (Conditional Implicit Neural Multi-Modal Atlas), a novel framework for creating high-resolution, spatio-temporal, multimodal brain atlases suitable for low-data settings. Unlike established methods, CINeMA operates in latent space, avoiding compute-intensive image registration and reducing atlas construction times from days to minutes. Furthermore, it enables flexible conditioning on anatomical features including gestational age, birth age, and pathologies such as agenesis of the corpus callosum and ventriculomegaly of varying degree. CINeMA supports downstream tasks such as tissue segmentation and age prediction, while its generative properties enable synthetic data creation and anatomically informed data augmentation. Surpassing state-of-the-art methods in accuracy, efficiency, and versatility, CINeMA represents a powerful tool for advancing brain research. We release the code and atlases at https://github.com/m-dannecker/CINeMA.
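The central idea behind a conditional implicit neural atlas, namely intensity as a learned function of spatial coordinates plus conditioning variables, can be sketched minimally. The toy below uses random weights in place of a trained network and a single normalized age condition; it omits CINeMA's latent-space machinery entirely, and every name here is illustrative.

```python
import numpy as np

# Random placeholder weights for a tiny coordinate MLP. A real
# conditional implicit neural atlas would learn these from data.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 32)), np.zeros(32)
W2, b2 = rng.normal(size=(32, 1)), np.zeros(1)

def atlas_intensity(coords: np.ndarray, age: float) -> np.ndarray:
    """Query the toy atlas: coords is an (N, 3) array of spatial points,
    `age` a normalized conditioning variable in [0, 1] (e.g. rescaled
    gestational age). Returns (N,) intensities."""
    cond = np.full((coords.shape[0], 1), age)
    h = np.tanh(np.concatenate([coords, cond], axis=1) @ W1 + b1)
    return (h @ W2 + b2).ravel()
```

Querying the same coordinates at different conditioning values yields different intensities, which is what lets a single network represent a continuum of time points and pathologies.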

Stroke-Aware CycleGAN: Improving Low-Field MRI Image Quality for Accurate Stroke Assessment.

Zhou Y, Liu Z, Xie X, Li H, Zhu W, Zhang Z, Suo Y, Meng X, Cheng J, Xu H, Wang N, Wang Y, Zhang C, Xue B, Jing J, Wang Y, Liu T

PubMed | Sep 3, 2025
Low-field portable magnetic resonance imaging (pMRI) devices address a crucial need in healthcare by offering on-demand, timely access to MRI, especially in routine stroke emergencies. Nevertheless, images acquired by these devices often exhibit poor clarity and low resolution, limiting their ability to support precise diagnostic evaluation and lesion quantification. In this paper, we propose a 3D deep learning-based model, named Stroke-Aware CycleGAN (SA-CycleGAN), to enhance the quality of low-field images and thereby improve routine stroke diagnosis. First, building on the traditional CycleGAN, SA-CycleGAN incorporates a prior on stroke lesions through a novel spatial feature transform mechanism. Second, gradient difference losses are added to counteract the tendency of synthesized images to be overly smooth. We present a dataset comprising 101 paired high-field and low-field diffusion-weighted imaging (DWI) scans, acquired through dual scans of the same patient in close temporal proximity. Our experiments demonstrate that SA-CycleGAN generates images with higher quality and greater clarity than the original low-field DWI. Additionally, SA-CycleGAN outperforms existing methods in quantifying stroke lesions. Lesion volume correlates strongly between the generated images and the high-field images (R = 0.852), whereas the correlation between the low-field and high-field images is notably lower (R = 0.462). Furthermore, the mean absolute difference in lesion volumes between the generated and high-field images (1.73 ± 2.03 mL) was significantly smaller than that between the low-field and high-field images (2.53 ± 4.24 mL). These results show that the synthesized images not only exhibit superior visual clarity compared to the low-field acquisitions but also agree closely with high-field images. In routine clinical practice, SA-CycleGAN offers an accessible and cost-effective means of rapidly obtaining higher-quality images, with the potential to enhance the efficiency and accuracy of stroke diagnosis. The code and trained models will be released on GitHub: SA-CycleGAN.
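The gradient difference loss used to combat over-smoothing can be illustrated with a minimal 3D sketch; this is a generic form of GDL with simple finite differences, not the paper's exact formulation or weighting.

```python
import numpy as np

def gradient_difference_loss(pred: np.ndarray, target: np.ndarray) -> float:
    """Penalize mismatch between the finite-difference gradient
    magnitudes of two 3D volumes along each spatial axis. Overly smooth
    predictions have weak gradients at true edges, so this term pushes
    the generator to reproduce sharp structure."""
    loss = 0.0
    for axis in range(3):
        gp = np.abs(np.diff(pred, axis=axis))    # predicted gradients
        gt = np.abs(np.diff(target, axis=axis))  # target gradients
        loss += np.mean((gp - gt) ** 2)
    return loss / 3.0
```

In training this term would be weighted and added to the usual CycleGAN adversarial and cycle-consistency losses.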

Evaluating large language model-generated brain MRI protocols: performance of GPT4o, o3-mini, DeepSeek-R1 and Qwen2.5-72B.

Kim SH, Schramm S, Schmitzer L, Serguen K, Ziegelmayer S, Busch F, Komenda A, Makowski MR, Adams LC, Bressem KK, Zimmer C, Kirschke J, Wiestler B, Hedderich D, Finck T, Bodden J

PubMed | Sep 3, 2025
To evaluate the potential of LLMs to generate sequence-level brain MRI protocols. This retrospective study employed a dataset of 150 brain MRI cases derived from local imaging request forms. Reference protocols were established by two neuroradiologists. GPT-4o, o3-mini, DeepSeek-R1 and Qwen2.5-72B were employed to generate brain MRI protocols based on the case descriptions. Protocol generation was conducted (1) with additional in-context learning involving local standard protocols (enhanced) and (2) without additional information (base). Additionally, two radiology residents independently defined MRI protocols. The sum of redundant and missing sequences (accuracy index) was defined as the performance metric. Accuracy indices were compared between groups using paired t-tests. The two neuroradiologists achieved substantial inter-rater agreement (Cohen's κ = 0.74). o3-mini demonstrated superior performance (base: 2.65 ± 1.61; enhanced: 1.94 ± 1.25), followed by GPT-4o (base: 3.11 ± 1.83; enhanced: 2.23 ± 1.48), DeepSeek-R1 (base: 3.42 ± 1.84; enhanced: 2.37 ± 1.42) and Qwen2.5-72B (base: 5.95 ± 2.78; enhanced: 2.75 ± 1.54). o3-mini consistently outperformed the other models by a significant margin. All four models showed highly significant performance improvements under the enhanced condition (adj. p < 0.001 for all models). The highest-performing LLM (o3-mini [enhanced]) yielded an accuracy index comparable to the residents (o3-mini [enhanced]: 1.94 ± 1.25; resident 1: 1.77 ± 1.29; resident 2: 1.77 ± 1.28). Our findings demonstrate the promising potential of LLMs in automating brain MRI protocoling, especially when augmented through in-context learning. o3-mini exhibited superior performance, followed by GPT-4o. Question: Brain MRI protocoling is a time-consuming, non-interpretative task, exacerbating radiologist workload. Findings: o3-mini demonstrated superior brain MRI protocoling performance. All models showed notable improvements when augmented with local standard protocols. Clinical relevance: MRI protocoling is a time-intensive, non-interpretative task that adds to radiologist workload; large language models offer potential for (semi-)automation of this process.
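The accuracy index used as the performance metric, the sum of redundant and missing sequences relative to the reference protocol, is straightforward to compute. A minimal sketch, with sequence names chosen purely for illustration:

```python
def accuracy_index(proposed, reference):
    """Accuracy index as described in the study: redundant sequences
    (proposed but not in the reference protocol) plus missing sequences
    (in the reference protocol but not proposed). Both arguments are
    sets of sequence names; lower is better, 0 is a perfect match."""
    redundant = len(proposed - reference)
    missing = len(reference - proposed)
    return redundant + missing

# Example: one superfluous SWI plus one missed FLAIR gives an index of 2.
reference = {"T1", "T2", "FLAIR", "DWI"}
proposed = {"T1", "T2", "DWI", "SWI"}
```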

RTGMFF: Enhanced fMRI-based Brain Disorder Diagnosis via ROI-driven Text Generation and Multimodal Feature Fusion

Junhao Jia, Yifei Sun, Yunyou Liu, Cheng Yang, Changmiao Wang, Feiwei Qin, Yong Peng, Wenwen Min

arXiv preprint | Sep 3, 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for probing brain function, yet reliable clinical diagnosis is hampered by low signal-to-noise ratios, inter-subject variability, and the limited frequency awareness of prevailing CNN- and Transformer-based models. Moreover, most fMRI datasets lack textual annotations that could contextualize regional activation and connectivity patterns. We introduce RTGMFF, a framework that unifies automatic ROI-level text generation with multimodal feature fusion for brain-disorder diagnosis. RTGMFF consists of three components: (i) ROI-driven fMRI text generation deterministically condenses each subject's activation, connectivity, age, and sex into reproducible text tokens; (ii) a hybrid frequency-spatial encoder fuses a hierarchical wavelet-Mamba branch with a cross-scale Transformer encoder to capture frequency-domain structure alongside long-range spatial dependencies; and (iii) an adaptive semantic alignment module embeds the ROI token sequence and visual features in a shared space, using a regularized cosine-similarity loss to narrow the modality gap. Extensive experiments on the ADHD-200 and ABIDE benchmarks show that RTGMFF surpasses current methods in diagnostic accuracy, achieving notable gains in sensitivity, specificity, and area under the ROC curve. Code is available at https://github.com/BeistMedAI/RTGMFF.
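The regularized cosine-similarity alignment described in component (iii) can be sketched for a single text/image embedding pair. The exact regularizer and weighting below are assumptions for illustration, not RTGMFF's precise objective:

```python
import numpy as np

def cosine_alignment_loss(text_emb: np.ndarray, img_emb: np.ndarray,
                          reg: float = 0.01) -> float:
    """Pull a paired text embedding and image embedding together by
    maximizing cosine similarity (loss = 1 - cos), with an L2 penalty
    on embedding norms as a simple stand-in regularizer."""
    cos = np.dot(text_emb, img_emb) / (
        np.linalg.norm(text_emb) * np.linalg.norm(img_emb))
    penalty = reg * (np.dot(text_emb, text_emb) + np.dot(img_emb, img_emb))
    return float(1.0 - cos + penalty)
```

Minimizing this over paired embeddings narrows the modality gap: aligned pairs approach a loss of 0 (plus the norm penalty), while opposed pairs approach 2.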

Temporally-Aware Diffusion Model for Brain Progression Modelling with Bidirectional Temporal Regularisation

Mattia Litrico, Francesco Guarnera, Mario Valerio Giuffrida, Daniele Ravì, Sebastiano Battiato

arXiv preprint | Sep 3, 2025
Generating realistic MRIs to accurately predict future changes in the structure of the brain is an invaluable tool for clinicians in assessing clinical outcomes and analysing disease progression at the patient level. However, existing methods present some limitations: (i) some approaches fail to explicitly capture the relationship between structural changes and time intervals, especially when trained on age-imbalanced datasets; (ii) others rely only on scan interpolation, which lacks clinical utility, as they generate intermediate images between timepoints rather than future pathological progression; and (iii) most approaches rely on 2D slice-based architectures, thereby disregarding the full 3D anatomical context, which is essential for accurate longitudinal predictions. We propose a 3D Temporally-Aware Diffusion Model (TADM-3D), which accurately predicts brain progression on MRI volumes. To better model the relationship between time interval and brain changes, TADM-3D uses a pre-trained Brain-Age Estimator (BAE) that guides the diffusion model to generate MRIs that accurately reflect the expected age difference between the baseline and the generated follow-up scan. Additionally, to further improve the temporal awareness of TADM-3D, we propose Back-In-Time Regularisation (BITR), training TADM-3D to predict bidirectionally: from baseline to follow-up (forward) as well as from follow-up to baseline (backward). Although predicting past scans has limited clinical application, this regularisation helps the model generate temporally more accurate scans. We train and evaluate TADM-3D on the OASIS-3 dataset and validate generalisation performance on an external test set from the NACC dataset. The code will be available upon acceptance.
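The Back-In-Time Regularisation idea, training the generator both forward and backward in time, reduces to summing two reconstruction terms. A minimal sketch in which `model`, `mse`, and the call signature are hypothetical stand-ins for the paper's actual components:

```python
import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images/volumes."""
    return float(np.mean((a - b) ** 2))

def bitr_loss(model, baseline, followup, dt, recon_loss=mse):
    """Bidirectional temporal regularisation: the model must map the
    baseline forward over +dt to match the follow-up, AND map the
    follow-up backward over -dt to match the baseline. `model` is any
    callable model(image, dt) -> image."""
    forward = recon_loss(model(baseline, dt), followup)
    backward = recon_loss(model(followup, -dt), baseline)
    return forward + backward
```

A model that only fits the forward direction is penalized by the backward term, which is what sharpens its sensitivity to the signed time interval.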

Resting-State Functional MRI: Current State, Controversies, Limitations, and Future Directions - AJR Expert Panel Narrative Review.

Vachha BA, Kumar VA, Pillai JJ, Shimony JS, Tanabe J, Sair HI

PubMed | Sep 3, 2025
Resting-state functional MRI (rs-fMRI), a promising method for interrogating different brain functional networks from a single MRI acquisition, is increasingly used in clinical presurgical and other pretherapeutic brain mapping. However, challenges in standardization of acquisition, preprocessing, and analysis methods across centers and variability in results interpretation complicate its clinical use. Additionally, inherent problems regarding reliability of language lateralization, interpatient variability of cognitive network representation, dynamic aspects of intranetwork and internetwork connectivity, and effects of neurovascular uncoupling on network detection still must be overcome. Although deep learning solutions and further methodologic standardization will help address these issues, rs-fMRI remains generally considered an adjunct to task-based fMRI (tb-fMRI) for clinical presurgical mapping. Nonetheless, in many clinical instances, rs-fMRI may offer valuable additional information that supplements tb-fMRI, especially if tb-fMRI is inadequate due to patient performance or other limitations. Future growth in clinical applications of rs-fMRI is anticipated as challenges are increasingly addressed. This AJR Expert Panel Narrative Review summarizes the current state and emerging clinical utility of rs-fMRI, focusing on its role in presurgical mapping. Ongoing controversies and limitations in clinical applicability are presented and future directions are discussed, including the developing role of rs-fMRI in neuromodulation treatment of various neurologic disorders.

Edge-centric Brain Connectome Representations Reveal Increased Brain Functional Diversity of Reward Circuit in Patients with Major Depressive Disorder.

Qin K, Ai C, Zhu P, Xiang J, Chen X, Zhang L, Wang C, Zou L, Chen F, Pan X, Wang Y, Gu J, Pan N, Chen W

PubMed | Sep 3, 2025
Major depressive disorder (MDD) has been increasingly understood as a disorder of network-level functional dysconnectivity. However, previous brain connectome studies have primarily relied on node-centric approaches, neglecting critical edge-edge interactions that may capture essential features of network dysfunction. This study included resting-state functional MRI data from 838 MDD patients and 881 healthy controls (HC) across 23 sites. We applied a novel edge-centric connectome model to estimate edge functional connectivity and identify overlapping network communities. Regional functional diversity was quantified via normalized entropy based on community overlap patterns. Neurobiological decoding was performed to map brain-wide relationships between functional diversity alterations and patterns of gene expression and neurotransmitter distribution. Comparative machine learning analyses further evaluated the diagnostic utility of edge-centric versus node-centric connectome representations. Compared with HC, MDD patients exhibited significantly increased functional diversity within the prefrontal-striatal-thalamic reward circuit. Neurobiological decoding analysis revealed that functional diversity alterations in MDD were spatially associated with transcriptional patterns enriched for inflammatory processes, as well as distribution of 5-HT1B receptors. Machine learning analyses demonstrated superior classification performance of edge-centric models over traditional node-centric approaches in distinguishing MDD patients from HC at the individual level. Our findings highlighted that abnormal functional diversity within the reward processing system might underlie multi-level neurobiological mechanisms of MDD. The edge-centric connectome approach offers a valuable tool for identifying disease biomarkers, characterizing individual variation and advancing current understanding of complex network configuration in psychiatric disorders.
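The normalized-entropy measure of regional functional diversity can be sketched as follows, assuming a node's overlapping-community memberships arrive as non-negative weights; the input format is an illustrative assumption, not the paper's exact estimator:

```python
import numpy as np

def normalized_entropy(community_weights: np.ndarray) -> float:
    """Functional diversity of one brain region as normalized Shannon
    entropy over its membership weights across k overlapping
    communities. Returns a value in [0, 1]: 0 means the region belongs
    to a single community, 1 means membership is spread uniformly
    across all k communities."""
    p = np.asarray(community_weights, dtype=float)
    p = p / p.sum()
    p = p[p > 0]  # treat 0 * log(0) as 0
    h = -np.sum(p * np.log(p))
    return float(h / np.log(len(community_weights)))
```

Under this measure, the reported finding corresponds to reward-circuit regions of MDD patients showing entropy values shifted toward 1, i.e., membership spread over more communities.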

AlzFormer: Video-based space-time attention model for early diagnosis of Alzheimer's disease.

Akan T, Akan S, Alp S, Ledbetter CR, Nobel Bhuiyan MA

PubMed | Sep 3, 2025
Early and accurate Alzheimer's disease (AD) diagnosis is critical for effective intervention, but it remains challenging due to the slow and complex progression of neurodegeneration. Recent studies in brain imaging analysis have highlighted the crucial role of deep learning techniques in computer-assisted interventions for diagnosing brain diseases. In this study, we propose AlzFormer, a novel deep learning framework based on a space-time attention mechanism, for multiclass classification of AD, MCI, and CN individuals using structural MRI scans. Unlike conventional deep learning models, we use spatiotemporal self-attention to model inter-slice continuity by treating T1-weighted MRI volumes as sequential inputs, where slices correspond to video frames. Our model was fine-tuned and evaluated using 1.5 T MRI scans from the ADNI dataset. To ensure anatomical consistency, all MRI volumes were pre-processed with skull stripping and spatial normalization to MNI space. AlzFormer achieved an overall accuracy of 94% on the test set, with balanced class-wise F1-scores (AD: 0.94, MCI: 0.99, CN: 0.98) and a macro-average AUC of 0.98. We also used attention map analysis to identify clinically significant patterns, particularly emphasizing subcortical structures and medial temporal regions implicated in AD. These findings demonstrate the potential of transformer-based architectures for robust and interpretable classification of brain disorders using structural MRI.
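Treating an MRI volume as a video for space-time attention amounts to recasting slices as frames and tokenizing each frame. A minimal sketch of that reshaping, with patch size and axis order as illustrative assumptions rather than AlzFormer's exact pipeline:

```python
import numpy as np

def volume_to_spacetime_tokens(volume: np.ndarray, patch: int = 8) -> np.ndarray:
    """Recast a T1-weighted volume (D, H, W) as D 'frames' of
    non-overlapping patch tokens, returning shape
    (D, num_patches_per_slice, patch*patch). A space-time attention
    transformer can then attend within slices (space) and across
    slices (time)."""
    d, h, w = volume.shape
    assert h % patch == 0 and w % patch == 0, "slice dims must divide by patch"
    return (volume
            .reshape(d, h // patch, patch, w // patch, patch)
            .transpose(0, 1, 3, 2, 4)       # group the two patch axes last
            .reshape(d, (h // patch) * (w // patch), patch * patch))
```

Each token would then be linearly embedded and fed to alternating (or joint) spatial and temporal attention layers, exactly as video transformers handle frames.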

An Artificial Intelligence System for Staging the Spheno-Occipital Synchondrosis.

Milani OH, Mills L, Nikho A, Tliba M, Ayyildiz H, Allareddy V, Ansari R, Cetin AE, Elnagar MH

PubMed | Sep 2, 2025
The aim of this study was to develop, test and validate automated, interpretable deep learning algorithms for the assessment and classification of spheno-occipital synchondrosis (SOS) fusion stages from cone beam computed tomography (CBCT). The sample consisted of 723 CBCT scans of orthodontic patients from private practices in the midwestern United States. The SOS fusion stages were classified by two orthodontists and an oral and maxillofacial radiologist. The deep learning models employed consisted of ResNet, EfficientNet and ConvNeXt. Additionally, a new attention-based model, ConvNeXt + Conv Attention, was developed to enhance classification accuracy by integrating attention mechanisms to capture subtle medical imaging features. Lastly, YOLOv11 was integrated for fully automated region detection and segmentation. ConvNeXt + Conv Attention outperformed the other models, achieving 88.94% accuracy with manual cropping and 82.49% accuracy in a fully automated workflow. This study introduces a novel artificial intelligence-based pipeline that reliably automates the classification of SOS fusion stages using advanced deep learning models, with the highest accuracy achieved by ConvNeXt + Conv Attention. These models enhance the efficiency, scalability and consistency of SOS staging while minimising manual intervention from the clinician, underscoring the potential for AI-driven solutions in orthodontics and clinical workflows.

Diffusion-QSM: diffusion model with time-travel and resampling refinement for quantitative susceptibility mapping.

Zhang M, Liu C, Zhang Y, Wei H

PubMed | Sep 2, 2025
Quantitative susceptibility mapping (QSM) is a useful magnetic resonance imaging technique. We aim to propose a deep learning (DL)-based method for QSM reconstruction that is robust to data perturbations. We developed Diffusion-QSM, a diffusion model-based method with a time-travel and resampling refinement module for high-quality QSM reconstruction. First, the diffusion prior is trained unconditionally on high-quality QSM images, without requiring explicit information about the measured tissue phase, thereby enhancing generalization performance. Subsequently, during inference, the physical constraints from the QSM forward model and the measurement are integrated into the output of the diffusion model to guide the sampling process toward realistic image representations. In addition, a time-travel and resampling module is employed during the later sampling stages to refine image quality, yielding improved reconstructions without significantly prolonging runtime. Experimental results show that Diffusion-QSM outperforms traditional and unsupervised DL methods for QSM reconstruction on simulated, in vivo and ex vivo data, and shows better generalization than supervised DL methods when processing out-of-distribution data. Diffusion-QSM successfully unifies data-driven diffusion priors and subject-specific physics constraints, enabling generalizable, high-quality QSM reconstruction under diverse perturbations, including changes in image contrast, resolution and scan direction. This work advances QSM reconstruction by bridging the generalization gap in deep learning. Its excellent quality and generalization capability underscore its potential for a range of realistic applications.
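The physics guidance that Diffusion-QSM interleaves with diffusion sampling can be illustrated by a single data-consistency step, here with a generic linear forward operator standing in for the dipole-convolution model; the update form and step size are assumptions for illustration:

```python
import numpy as np

def data_consistency_step(x: np.ndarray, y: np.ndarray,
                          forward: np.ndarray, step: float = 0.1) -> np.ndarray:
    """Nudge the current diffusion sample x toward agreement with the
    measurement y under a known linear forward model: one gradient
    step on ||forward @ x - y||^2. In QSM, `forward` would be the
    dipole convolution mapping susceptibility to tissue phase."""
    residual = forward @ x - y
    grad = forward.T @ residual
    return x - step * grad
```

Alternating such steps with the diffusion model's denoising updates is what keeps samples both realistic (via the learned prior) and consistent with the measured phase (via the physics term).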