Page 28 of 1621612 results

Resting-State Functional MRI: Current State, Controversies, Limitations, and Future Directions-<i>AJR</i> Expert Panel Narrative Review.

Vachha BA, Kumar VA, Pillai JJ, Shimony JS, Tanabe J, Sair HI

PubMed · Sep 3, 2025
Resting-state functional MRI (rs-fMRI), a promising method for interrogating different brain functional networks from a single MRI acquisition, is increasingly used in clinical presurgical and other pretherapeutic brain mapping. However, challenges in standardization of acquisition, preprocessing, and analysis methods across centers and variability in results interpretation complicate its clinical use. Additionally, inherent problems regarding reliability of language lateralization, interpatient variability of cognitive network representation, dynamic aspects of intranetwork and internetwork connectivity, and effects of neurovascular uncoupling on network detection still must be overcome. Although deep learning solutions and further methodologic standardization will help address these issues, rs-fMRI remains generally considered an adjunct to task-based fMRI (tb-fMRI) for clinical presurgical mapping. Nonetheless, in many clinical instances, rs-fMRI may offer valuable additional information that supplements tb-fMRI, especially if tb-fMRI is inadequate due to patient performance or other limitations. Future growth in clinical applications of rs-fMRI is anticipated as challenges are increasingly addressed. This <i>AJR</i> Expert Panel Narrative Review summarizes the current state and emerging clinical utility of rs-fMRI, focusing on its role in presurgical mapping. Ongoing controversies and limitations in clinical applicability are presented and future directions are discussed, including the developing role of rs-fMRI in neuromodulation treatment of various neurologic disorders.

SynBT: High-quality Tumor Synthesis for Breast Tumor Segmentation by 3D Diffusion Model

Hongxu Yang, Edina Timko, Levente Lippenszky, Vanda Czipczer, Lehel Ferenczi

arXiv preprint · Sep 3, 2025
Synthetic tumors in medical images offer controllable characteristics that facilitate the training of machine learning models, leading to improved segmentation performance. However, existing tumor-synthesis methods perform suboptimally when the tumor occupies a large spatial volume, as in breast tumor segmentation on MRI with a large field of view (FOV), because commonly used generation methods operate on small patches. In this paper, we propose a 3D medical diffusion model, called SynBT, to generate high-quality breast tumors (BT) in contrast-enhanced MRI. The proposed model consists of a patch-to-volume autoencoder that compresses high-resolution MRI volumes into a compact latent space while preserving the resolution of large-FOV volumes. From the resulting latent features, a mask-conditioned diffusion model synthesizes breast tumors within selected regions of breast tissue, yielding realistic tumor appearances. We evaluated the proposed method on a tumor segmentation task: the high-quality synthetic tumors improved common segmentation models by 2-3% Dice score on a large public dataset, demonstrating the method's benefit for tumor segmentation in MRI.

AlzFormer: Video-based space-time attention model for early diagnosis of Alzheimer's disease.

Akan T, Akan S, Alp S, Ledbetter CR, Nobel Bhuiyan MA

PubMed · Sep 3, 2025
Early and accurate diagnosis of Alzheimer's disease (AD) is critical for effective intervention but remains challenging due to the slow and complex progression of neurodegeneration. Recent studies in brain imaging analysis have highlighted the crucial role of deep learning techniques in computer-assisted interventions for diagnosing brain diseases. In this study, we propose AlzFormer, a novel deep learning framework based on a space-time attention mechanism, for multiclass classification of AD, mild cognitive impairment (MCI), and cognitively normal (CN) individuals using structural MRI scans. Unlike conventional deep learning models, we use spatiotemporal self-attention to model inter-slice continuity, treating T1-weighted MRI volumes as sequential inputs in which slices correspond to video frames. Our model was fine-tuned and evaluated on 1.5 T MRI scans from the ADNI dataset. To ensure anatomical consistency, all MRI volumes were pre-processed with skull stripping and spatial normalization to MNI space. AlzFormer achieved an overall accuracy of 94% on the test set, with balanced class-wise F1-scores (AD: 0.94, MCI: 0.99, CN: 0.98) and a macro-average AUC of 0.98. We also used attention-map analysis to identify clinically significant patterns, particularly emphasizing subcortical structures and medial temporal regions implicated in AD. These findings demonstrate the potential of transformer-based architectures for robust and interpretable classification of brain disorders using structural MRI.
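The core idea in this abstract — treating the slices of a T1-weighted volume as video frames and letting self-attention mix information across them — can be sketched minimally. This is an illustrative toy with random projection weights, not the AlzFormer architecture; `slice_attention` and its dimensions are assumptions.

```python
import numpy as np

def slice_attention(volume, d_embed=16, seed=0):
    """Toy spatiotemporal self-attention across MRI slices.

    Each axial slice of a (D, H, W) volume is flattened into one token
    (the "frame"), then scaled dot-product self-attention mixes
    information across slices -- the inter-slice continuity idea
    described in the abstract. Projection weights are random here.
    """
    rng = np.random.default_rng(seed)
    D, H, W = volume.shape
    tokens = volume.reshape(D, H * W)                      # one token per slice
    Wq = rng.normal(size=(H * W, d_embed)) / np.sqrt(H * W)
    Wk = rng.normal(size=(H * W, d_embed)) / np.sqrt(H * W)
    Wv = rng.normal(size=(H * W, d_embed)) / np.sqrt(H * W)
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = Q @ K.T / np.sqrt(d_embed)                    # (D, D) slice-to-slice
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)          # softmax over slices
    return weights @ V                                     # (D, d_embed) mixed features

vol = np.random.default_rng(1).normal(size=(8, 4, 4))
out = slice_attention(vol)
print(out.shape)  # one attended feature vector per slice: (8, 16)
```

A real model would stack such layers with learned weights and add positional encodings so the attention knows slice order.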

Stroke-Aware CycleGAN: Improving Low-Field MRI Image Quality for Accurate Stroke Assessment.

Zhou Y, Liu Z, Xie X, Li H, Zhu W, Zhang Z, Suo Y, Meng X, Cheng J, Xu H, Wang N, Wang Y, Zhang C, Xue B, Jing J, Wang Y, Liu T

PubMed · Sep 3, 2025
Low-field portable magnetic resonance imaging (pMRI) devices address a crucial healthcare need by offering on-demand, timely access to MRI, especially in routine stroke emergencies. Nevertheless, images acquired by these devices often exhibit poor clarity and low resolution, limiting their ability to support precise diagnostic evaluation and lesion quantification. In this paper, we propose a 3D deep learning-based model, named Stroke-Aware CycleGAN (SA-CycleGAN), to enhance the quality of low-field images and thereby improve routine stroke diagnosis. First, building on the traditional CycleGAN, SA-CycleGAN incorporates a prior on stroke lesions through a novel spatial feature transform mechanism. Second, gradient difference losses are added to counteract the tendency of synthesized images to be overly smooth. We present a dataset comprising 101 paired high-field and low-field diffusion-weighted imaging (DWI) scans, acquired through dual scans of the same patient in close temporal proximity. Our experiments demonstrate that SA-CycleGAN generates images of higher quality and greater clarity than the original low-field DWI. SA-CycleGAN also outperforms existing methods in quantifying stroke lesions: lesion volume correlates strongly between the generated and high-field images (R = 0.852), whereas the correlation between the low-field and high-field images is notably lower (R = 0.462). Furthermore, the mean absolute difference in lesion volumes between the generated and high-field images (1.73 ± 2.03 mL) was significantly smaller than that between the low-field and high-field images (2.53 ± 4.24 mL). These results show that the synthesized images not only exhibit superior visual clarity compared to the low-field acquisitions but also agree closely with the high-field images. In routine clinical practice, the proposed SA-CycleGAN offers an accessible and cost-effective means of rapidly obtaining higher-quality images, with the potential to enhance the efficiency and accuracy of stroke diagnosis. The code and trained models will be released on GitHub: SA-CycleGAN.
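The gradient difference loss mentioned above is a standard remedy for over-smooth GAN outputs; the abstract does not give SA-CycleGAN's exact formulation, so the following is a common illustrative variant: penalize the mismatch between finite-difference gradient magnitudes of the synthesized and reference images along each axis.

```python
import numpy as np

def gradient_difference_loss(pred, target, alpha=1.0):
    """Illustrative gradient difference loss (GDL).

    Compares the magnitudes of finite-difference image gradients of the
    synthesized (pred) and reference (target) images along every axis;
    a smooth prediction of a sharp target yields a large loss. The exact
    SA-CycleGAN loss may differ.
    """
    loss = 0.0
    for axis in range(pred.ndim):
        gp = np.abs(np.diff(pred, axis=axis))   # |gradient| of prediction
        gt = np.abs(np.diff(target, axis=axis))  # |gradient| of reference
        loss += np.mean(np.abs(gp - gt) ** alpha)
    return loss

flat = np.zeros((4, 4))
print(gradient_difference_loss(flat, flat))        # 0.0: identical images
print(gradient_difference_loss(flat, np.eye(4)) > 0)  # True: edges were lost
```

In practice this term is added to the usual adversarial and cycle-consistency losses with a weighting coefficient.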

CINeMA: Conditional Implicit Neural Multi-Modal Atlas for a Spatio-Temporal Representation of the Perinatal Brain.

Dannecker M, Sideri-Lampretsa V, Starck S, Mihailov A, Milh M, Girard N, Auzias G, Rueckert D

PubMed · Sep 3, 2025
Magnetic resonance imaging of fetal and neonatal brains reveals rapid neurodevelopment marked by substantial anatomical changes unfolding within days. Studying this critical stage of the developing human brain, therefore, requires accurate brain models-referred to as atlases-of high spatial and temporal resolution. To meet these demands, established traditional atlases and recently proposed deep learning-based methods rely on large and comprehensive datasets. This poses a major challenge for studying brains in the presence of pathologies for which data remains scarce. We address this limitation with CINeMA (Conditional Implicit Neural Multi-Modal Atlas), a novel framework for creating high-resolution, spatio-temporal, multimodal brain atlases, suitable for low-data settings. Unlike established methods, CINeMA operates in latent space, avoiding compute-intensive image registration and reducing atlas construction times from days to minutes. Furthermore, it enables flexible conditioning on anatomical features including gestational age, birth age, and pathologies such as agenesis of the corpus callosum and ventriculomegaly of varying degrees. CINeMA supports downstream tasks such as tissue segmentation and age prediction, while its generative properties enable synthetic data creation and anatomically informed data augmentation. Surpassing state-of-the-art methods in accuracy, efficiency, and versatility, CINeMA represents a powerful tool for advancing brain research. We release the code and atlases at https://github.com/m-dannecker/CINeMA.
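The "conditional implicit neural" idea behind CINeMA can be illustrated in a few lines: instead of storing a voxel grid per time point, an MLP maps a spatial coordinate plus condition variables (e.g. gestational age) to an intensity, making the atlas a continuous function of space and condition. This toy uses random, untrained weights purely to show the interface; `make_inr` and all dimensions are assumptions, not the CINeMA architecture.

```python
import numpy as np

def make_inr(d_hidden=32, seed=0):
    """Minimal conditional implicit neural representation.

    Returns a function atlas(x, y, z, age) -> intensity. A real atlas
    would train these weights on subject scans; here they are random,
    so outputs are meaningless but the querying pattern is the point.
    """
    rng = np.random.default_rng(seed)
    W1 = rng.normal(size=(4, d_hidden))   # input: (x, y, z, normalized age)
    b1 = np.zeros(d_hidden)
    W2 = rng.normal(size=(d_hidden, 1))

    def atlas(x, y, z, age):
        inp = np.array([x, y, z, age / 40.0])  # crude age normalization
        h = np.tanh(inp @ W1 + b1)
        return float(h @ W2)                   # intensity at that point
    return atlas

atlas = make_inr()
# query the same spatial point at two gestational ages
v1 = atlas(0.1, 0.2, 0.3, age=28.0)
v2 = atlas(0.1, 0.2, 0.3, age=36.0)
print(v1 != v2)  # True: the atlas varies continuously with the condition
```

Because the representation is continuous, the atlas can be sampled at any resolution and any condition value, which is what enables the fast, registration-free construction described above.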

MRI-based deep learning radiomics in predicting histological differentiation of oropharyngeal cancer: a multicenter cohort study.

Pan Z, Lu W, Yu C, Fu S, Ling H, Liu Y, Zhang X, Gong L

PubMed · Sep 3, 2025
The primary aim of this research was to create and rigorously assess a deep learning radiomics (DLR) framework utilizing magnetic resonance imaging (MRI) to forecast the histological differentiation grades of oropharyngeal cancer. This retrospective analysis encompassed 122 patients diagnosed with oropharyngeal cancer across three medical institutions in China. The participants were divided at random into two groups: a training cohort comprising 85 individuals and a test cohort of 37. Radiomics features derived from MRI scans, along with deep learning (DL) features, were meticulously extracted and carefully refined. These two sets of features were then integrated to build the DLR model, designed to assess the histological differentiation of oropharyngeal cancer. The model's predictive efficacy was gauged through the area under the receiver operating characteristic curve (AUC) and decision curve analysis (DCA). The DLR model demonstrated impressive performance, achieving strong AUC scores of 0.871 on the training cohort and 0.803 on the test cohort, outperforming both the standalone radiomics and DL models. Additionally, the DCA curve highlighted the significance of the DLR model in forecasting the histological differentiation of oropharyngeal cancer. The MRI-based DLR model demonstrated high predictive ability for histological differentiation of oropharyngeal cancer, which might be important for accurate preoperative diagnosis and clinical decision-making.
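The AUC reported above has a simple rank interpretation: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A direct pairwise implementation (toy labels and scores, not the study's data):

```python
import numpy as np

def roc_auc(labels, scores):
    """ROC AUC via the pairwise rank definition: the fraction of
    (positive, negative) pairs where the positive case receives the
    higher score; ties count as 0.5."""
    labels = np.asarray(labels)
    scores = np.asarray(scores)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))

y = [1, 1, 0, 0, 1]
s = [0.9, 0.8, 0.3, 0.4, 0.2]
print(roc_auc(y, s))  # 4 of 6 comparable pairs are concordant
```

For large cohorts a rank-sum (Mann-Whitney) formulation avoids the quadratic pair loop, but the value is identical.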

Evaluating large language model-generated brain MRI protocols: performance of GPT4o, o3-mini, DeepSeek-R1 and Qwen2.5-72B.

Kim SH, Schramm S, Schmitzer L, Serguen K, Ziegelmayer S, Busch F, Komenda A, Makowski MR, Adams LC, Bressem KK, Zimmer C, Kirschke J, Wiestler B, Hedderich D, Finck T, Bodden J

PubMed · Sep 3, 2025
To evaluate the potential of large language models (LLMs) to generate sequence-level brain MRI protocols. This retrospective study employed a dataset of 150 brain MRI cases derived from local imaging request forms. Reference protocols were established by two neuroradiologists. GPT-4o, o3-mini, DeepSeek-R1 and Qwen2.5-72B were employed to generate brain MRI protocols based on the case descriptions. Protocol generation was conducted (1) with additional in-context learning involving local standard protocols (enhanced) and (2) without additional information (base). Additionally, two radiology residents independently defined MRI protocols. The sum of redundant and missing sequences (accuracy index) was defined as the performance metric. Accuracy indices were compared between groups using paired t-tests. The two neuroradiologists achieved substantial inter-rater agreement (Cohen's κ = 0.74). o3-mini demonstrated superior performance (base: 2.65 ± 1.61; enhanced: 1.94 ± 1.25), followed by GPT-4o (base: 3.11 ± 1.83; enhanced: 2.23 ± 1.48), DeepSeek-R1 (base: 3.42 ± 1.84; enhanced: 2.37 ± 1.42) and Qwen2.5-72B (base: 5.95 ± 2.78; enhanced: 2.75 ± 1.54). o3-mini consistently outperformed the other models by a significant margin. All four models showed highly significant performance improvements under the enhanced condition (adj. p < 0.001 for all models). The highest-performing LLM (o3-mini [enhanced]) yielded an accuracy index comparable to residents (o3-mini [enhanced]: 1.94 ± 1.25, resident 1: 1.77 ± 1.29, resident 2: 1.77 ± 1.28). Our findings demonstrate the promising potential of LLMs in automating brain MRI protocoling, especially when augmented through in-context learning. o3-mini exhibited superior performance, followed by GPT-4o. Question: Brain MRI protocoling is a time-consuming, non-interpretative task, exacerbating radiologist workload. Findings: o3-mini demonstrated superior brain MRI protocoling performance; all models showed notable improvements when augmented with local standard protocols. Clinical relevance: MRI protocoling is a time-intensive, non-interpretative task that adds to radiologist workload; large language models offer potential for (semi-)automation of this process.
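The accuracy index defined in this study (redundant plus missing sequences relative to the reference protocol) reduces to a set-difference computation; the sequence names below are illustrative, not from the paper.

```python
def accuracy_index(generated, reference):
    """Accuracy index as defined in the study: the number of redundant
    sequences (generated but absent from the reference protocol) plus
    the number of missing sequences (in the reference but not
    generated). Lower is better; 0 means a perfect protocol."""
    gen, ref = set(generated), set(reference)
    redundant = gen - ref
    missing = ref - gen
    return len(redundant) + len(missing)

reference_protocol = ["T1", "T2", "FLAIR", "DWI", "SWI"]
llm_protocol = ["T1", "T2", "FLAIR", "DWI", "T1 post-contrast"]
print(accuracy_index(llm_protocol, reference_protocol))  # 2: one redundant + one missing
```

Note the metric weights an unnecessary extra sequence and an omitted one equally, which is why a paired t-test over cases suffices to compare models.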

RTGMFF: Enhanced fMRI-based Brain Disorder Diagnosis via ROI-driven Text Generation and Multimodal Feature Fusion

Junhao Jia, Yifei Sun, Yunyou Liu, Cheng Yang, Changmiao Wang, Feiwei Qin, Yong Peng, Wenwen Min

arXiv preprint · Sep 3, 2025
Functional magnetic resonance imaging (fMRI) is a powerful tool for probing brain function, yet reliable clinical diagnosis is hampered by low signal-to-noise ratios, inter-subject variability, and the limited frequency awareness of prevailing CNN- and Transformer-based models. Moreover, most fMRI datasets lack textual annotations that could contextualize regional activation and connectivity patterns. We introduce RTGMFF, a framework that unifies automatic ROI-level text generation with multimodal feature fusion for brain-disorder diagnosis. RTGMFF consists of three components: (i) ROI-driven fMRI text generation deterministically condenses each subject's activation, connectivity, age, and sex into reproducible text tokens; (ii) Hybrid frequency-spatial encoder fuses a hierarchical wavelet-mamba branch with a cross-scale Transformer encoder to capture frequency-domain structure alongside long-range spatial dependencies; and (iii) Adaptive semantic alignment module embeds the ROI token sequence and visual features in a shared space, using a regularized cosine-similarity loss to narrow the modality gap. Extensive experiments on the ADHD-200 and ABIDE benchmarks show that RTGMFF surpasses current methods in diagnostic accuracy, achieving notable gains in sensitivity, specificity, and area under the ROC curve. Code is available at https://github.com/BeistMedAI/RTGMFF.
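The third RTGMFF component uses a "regularized cosine-similarity loss" to pull text and visual embeddings together; the paper's exact regularizer is not given in the abstract, so the following is a hedged sketch: an alignment term driving matched pairs toward cosine similarity 1, plus an assumed L2 penalty on embedding magnitudes.

```python
import numpy as np

def cosine_alignment_loss(text_emb, img_emb, reg=1e-2):
    """Illustrative modality-alignment loss: mean (1 - cosine similarity)
    over matched text/image embedding pairs, regularized by the mean
    squared embedding norm. Zero iff every pair points the same way
    (with reg=0). Not the exact RTGMFF formulation."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    cos = np.sum(t * v, axis=1)            # per-pair cosine similarity
    align = np.mean(1.0 - cos)             # 0 when perfectly aligned
    penalty = reg * (np.mean(text_emb ** 2) + np.mean(img_emb ** 2))
    return align + penalty

x = np.ones((3, 4))
print(cosine_alignment_loss(x, x, reg=0.0))  # 0.0: identical embeddings
```

Minimizing such a term narrows the modality gap so the fused representation treats the ROI text tokens and visual features as one space.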

Edge-centric Brain Connectome Representations Reveal Increased Brain Functional Diversity of Reward Circuit in Patients with Major Depressive Disorder.

Qin K, Ai C, Zhu P, Xiang J, Chen X, Zhang L, Wang C, Zou L, Chen F, Pan X, Wang Y, Gu J, Pan N, Chen W

PubMed · Sep 3, 2025
Major depressive disorder (MDD) has been increasingly understood as a disorder of network-level functional dysconnectivity. However, previous brain connectome studies have primarily relied on node-centric approaches, neglecting critical edge-edge interactions that may capture essential features of network dysfunction. This study included resting-state functional MRI data from 838 MDD patients and 881 healthy controls (HC) across 23 sites. We applied a novel edge-centric connectome model to estimate edge functional connectivity and identify overlapping network communities. Regional functional diversity was quantified via normalized entropy based on community overlap patterns. Neurobiological decoding was performed to map brain-wide relationships between functional diversity alterations and patterns of gene expression and neurotransmitter distribution. Comparative machine learning analyses further evaluated the diagnostic utility of edge-centric versus node-centric connectome representations. Compared with HC, MDD patients exhibited significantly increased functional diversity within the prefrontal-striatal-thalamic reward circuit. Neurobiological decoding analysis revealed that functional diversity alterations in MDD were spatially associated with transcriptional patterns enriched for inflammatory processes, as well as distribution of 5-HT1B receptors. Machine learning analyses demonstrated superior classification performance of edge-centric models over traditional node-centric approaches in distinguishing MDD patients from HC at the individual level. Our findings highlighted that abnormal functional diversity within the reward processing system might underlie multi-level neurobiological mechanisms of MDD. The edge-centric connectome approach offers a valuable tool for identifying disease biomarkers, characterizing individual variation and advancing current understanding of complex network configuration in psychiatric disorders.
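The regional functional diversity measure above — normalized entropy over a region's community-overlap pattern — is straightforward to compute from a distribution of community affiliations. A minimal sketch (the study's exact estimator may differ in detail):

```python
import numpy as np

def normalized_entropy(p):
    """Normalized Shannon entropy of a region's community-affiliation
    distribution over K overlapping communities: 1 when participation is
    spread evenly (maximal functional diversity), 0 when the region
    belongs to a single community."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                    # normalize to a probability vector
    nz = p[p > 0]                      # 0 * log 0 is treated as 0
    H = -np.sum(nz * np.log(nz))
    return 0.0 if H == 0 else H / np.log(len(p))  # divide by max entropy log K

print(normalized_entropy([0.25, 0.25, 0.25, 0.25]))  # 1.0: even spread over 4 communities
print(normalized_entropy([1, 0, 0, 0]))              # 0.0: single community
```

Under this measure, the reported MDD finding corresponds to prefrontal-striatal-thalamic regions participating more evenly across overlapping edge communities than in controls.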

Predicting Prognosis of Light-Chain Cardiac Amyloidosis by Magnetic Resonance Imaging and Deep Learning.

Wang S, Liu C, Guo Y, Sang H, Li X, Lin L, Li X, Wu Y, Zhang L, Tian J, Li J, Wang Y

PubMed · Sep 2, 2025
Light-chain cardiac amyloidosis (AL-CA) is a progressive heart disease with a high mortality rate and variable prognosis. The presently used Mayo staging method can only stratify patients into four stages, highlighting the need for a more individualized prognostic method. We aim to develop a novel deep learning (DL) model for whole-heart analysis of cardiovascular magnetic resonance-derived late gadolinium enhancement (LGE) images to predict individualized prognosis in AL-CA. This study included 394 patients with AL-CA who underwent standardized chemotherapy and had at least one year of follow-up. The approach involved automated segmentation of the heart in LGE images and feature extraction using a Transformer-based DL model. To enhance feature differentiation and mitigate overfitting, a contrastive pretraining strategy was employed to accentuate distinctions between patients with different prognoses while clustering similar cases. Finally, an ensemble learning strategy integrated predictions from 15 models at 15 survival time points into a comprehensive prognostic model. In the testing set of 79 patients, the DL model achieved a C-index of 0.91 and an AUC of 0.95 in predicting 2.6-year survival (HR: 2.67), outperforming the Mayo model (C-index = 0.65, AUC = 0.71). The DL model effectively distinguished patients with the same Mayo stage but different prognoses. Visualization techniques revealed that the model captures complex, high-dimensional prognostic features across multiple cardiac regions, extending beyond the amyloid-affected areas. This fully automated DL model can predict individualized prognosis in AL-CA from LGE images, complementing the presently used Mayo staging method.
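The C-index reported for this prognostic model is Harrell's concordance index: among comparable patient pairs, the fraction where the patient who died earlier was assigned the higher predicted risk. A direct pairwise implementation on toy data (the values below are illustrative, not from the study):

```python
def c_index(risk, time, event):
    """Harrell's concordance index for right-censored survival data.

    A pair (i, j) is comparable if patient i had the event and died
    before patient j was last observed. The pair is concordant if the
    model assigned i the higher risk; ties in risk count as 0.5.
    """
    concordant, ties, comparable = 0, 0, 0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if event[i] and time[i] < time[j]:  # comparable pair
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable

risk = [0.9, 0.4, 0.7, 0.1]   # model's predicted risk per patient
time = [1.0, 3.0, 2.0, 4.0]   # years to event or censoring
event = [1, 1, 1, 0]          # 1 = died, 0 = censored
print(c_index(risk, time, event))  # 1.0: risks perfectly rank survival
```

A C-index of 0.5 is chance-level ranking, so the reported 0.91 versus 0.65 for Mayo staging reflects a large gain in discriminating individual outcomes.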