Page 13 of 1601593 results

Synthesizing SWI from 3T to 7T by generative diffusion network for deep medullary veins visualization.

Li S, Deng X, Li Q, Zhen Z, Han L, Chen K, Zhou C, Chen F, Huang P, Zhang R, Chen H, Zhang T, Chen W, Tan T, Liu C

PubMed | Sep 19 2025
Ultrahigh-field susceptibility-weighted imaging (SWI) provides excellent tissue contrast and anatomical detail of the brain. However, ultrahigh-field magnetic resonance (MR) scanners are expensive and expose patients to uncomfortable acoustic noise. Deep learning approaches have therefore been proposed to synthesize high-field MR images from low-field MR images; most existing methods rely on generative adversarial networks (GANs) and achieve acceptable results. However, the well-recognized instability of GAN training limits synthesis performance on SWI images, given their fine microvascular structure. Diffusion models, a promising alternative, instead learn a mapping from Gaussian noise to the target image through a gradual sampling process over a large number of steps. To address this limitation, we present a generative diffusion-based deep learning imaging model, the conditional denoising diffusion probabilistic model (CDDPM), for synthesizing high-field (7 Tesla) SWI images from low-field (3 Tesla) SWI images, and we assess its clinical applicability. Crucially, the experimental results demonstrate that the diffusion-based model synthesizing 7T SWI from 3T SWI images can potentially provide an alternative way to obtain the advantages of ultrahigh-field 7T MR images for deep medullary vein visualization.
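The CDDPM described above builds on the standard DDPM forward process. As a minimal sketch (not the authors' implementation), the closed-form noising step and channel-wise conditioning on the 3T image can be written as follows; the cosine schedule and all function names are illustrative assumptions:

```python
import numpy as np

def cosine_alpha_bar(T=1000, s=0.008):
    # Cumulative signal-retention schedule alpha_bar_t (cosine form):
    # alpha_bar_0 = 1 (no noise), alpha_bar_T ~ 0 (pure noise).
    t = np.arange(T + 1) / T
    f = np.cos((t + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]

def q_sample(x0, t, alpha_bar, eps):
    # Closed-form forward diffusion:
    # x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps
    ab = alpha_bar[t]
    return np.sqrt(ab) * x0 + np.sqrt(1.0 - ab) * eps

def conditional_training_pair(x7t, x3t, t, alpha_bar, rng):
    # A conditional DDPM feeds the 3T image alongside the noised 7T target;
    # the denoiser is trained to predict eps from this stacked input.
    eps = rng.standard_normal(x7t.shape)
    xt = q_sample(x7t, t, alpha_bar, eps)
    model_input = np.stack([xt, x3t])  # channel-wise conditioning
    return model_input, eps            # (network input, regression target)
```

At sampling time the same 3T conditioning channel steers each reverse step, which is how the model ties the generated 7T image to the input anatomy.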

Deep learning-based acceleration and denoising of 0.55T MRI for enhanced conspicuity of vestibular Schwannoma post contrast administration.

Hinsen M, Nagel A, Heiss R, May M, Wiesmueller M, Mathy C, Zeilinger M, Hornung J, Mueller S, Uder M, Kopp M

PubMed | Sep 19 2025
Deep learning (DL)-based MRI denoising techniques promise improved image quality and shorter examination times. This advancement is particularly beneficial for 0.55T MRI, where the inherently lower signal-to-noise ratio (SNR) can compromise image quality. Sufficient SNR is crucial for the reliable detection of vestibular schwannoma (VS). The objective of this study was to evaluate VS conspicuity and acquisition time (TA) of contrast-enhanced 0.55T MRI examinations using a DL denoising algorithm. From January 2024 to October 2024, we retrospectively included 30 patients with VS (9 women). We acquired a clinical reference protocol of the cerebellopontine angle containing a T1w fat-saturated (fs) axial sequence (number of signal averages [NSA] 4) and a T1w Spectral Attenuated Inversion Recovery (SPAIR) coronal sequence (NSA 2) after contrast agent (CA) application, without advanced DL-based denoising (w/o DL). We reconstructed the axial T1w fs CA and coronal T1w SPAIR CA sequences first with full DL denoising without change of NSA (DL&4NSA), and second with 1 NSA each (DL&1NSA). Each sequence was rated on a 5-point Likert scale (1: insufficient; 3: moderate, clinically sufficient; 5: perfect) for overall image quality, VS conspicuity, and artifacts. Secondly, we analyzed the reliability of the size measurements. Two radiologists specializing in head and neck imaging performed the reading and measurements. The Wilcoxon signed-rank test was used for non-parametric statistical comparison. The DL&4NSA axial/coronal study sequence achieved the highest overall image quality (IQ; median 4.9). IQ for DL&1NSA was higher (median 4.0) than for the reference sequence w/o DL (median 3.5; p < 0.01). Similarly, VS conspicuity was best for DL&4NSA (median 4.9), decreased for DL&1NSA (median 4.1), and was lower but still sufficient w/o DL (median 3.7, each p < 0.01).
The TA for the axial and coronal post-contrast sequences was 8:59 minutes for DL&4NSA and w/o DL and decreased to 3:24 minutes with DL&1NSA. This study underlines that advanced DL-based denoising techniques can reduce the examination time by more than half while simultaneously improving image quality.
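The study's non-parametric comparison of paired Likert ratings can be reproduced in outline with scipy. The ratings below are synthetic placeholders, not the study data; only the statistical procedure (Wilcoxon signed-rank test on paired scores) follows the abstract:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical paired Likert ratings (1-5) for 30 patients: the same
# sequences rated once without DL denoising and once with it.
conventional = rng.integers(3, 5, size=30).astype(float)  # scores of 3 or 4
dl_denoised = np.clip(conventional + rng.choice([0.5, 1.0], size=30), 1, 5)

# Paired non-parametric test, as in the abstract.
stat, p = wilcoxon(dl_denoised, conventional)
```

With a consistent shift toward higher ratings, the test yields a small p-value, matching the pattern of significance the study reports.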

Lightweight Transfer Learning Models for Multi-Class Brain Tumor Classification: Glioma, Meningioma, Pituitary Tumors, and No Tumor MRI Screening.

Gorenshtein A, Liba T, Goren A

PubMed | Sep 19 2025
Glioma, pituitary tumors, and meningiomas constitute the major types of primary brain tumors. The challenge in achieving a definitive diagnosis stems from the brain's complex structure, limited accessibility for precise imaging, and the resemblance between different types of tumors. An alternative and promising solution is the application of artificial intelligence (AI), specifically through deep learning models. We developed multiple lightweight deep learning models, namely ResNet-18 (both pretrained on ImageNet and trained from scratch), ResNet-34, ResNet-50, and a custom CNN, to classify glioma, meningioma, pituitary tumor, and no-tumor MRI scans. A dataset of 7023 images was employed, split into 5712 for training and 1311 for validation. Each model was evaluated via accuracy, area under the curve (AUC), sensitivity, specificity, and confusion matrices. We compared our models to state-of-the-art (SOTA) methods such as SAlexNet and TumorGANet, highlighting computational efficiency and classification performance. The pretrained ResNet models achieved 98.5-99.2% accuracy and near-perfect validation metrics, with an overall AUC of 1.0 and average sensitivity and specificity both exceeding 97% across the four classes. In comparison, ResNet-18 trained from scratch and the custom CNN achieved 91.99% and 87.03% accuracy, respectively, with AUCs ranging from 0.94 to 1.00. Error analysis revealed moderate misclassification of meningiomas as gliomas in non-pretrained models. Learning-rate optimization facilitated stable convergence, and loss metrics indicated effective generalization with minimal overfitting. Our findings confirm that a moderately sized, transfer-learned network (ResNet-18) can deliver high diagnostic accuracy and robust performance for four-class brain tumor classification. This approach aligns with the goal of providing efficient, accurate, and easily deployable AI solutions, particularly for smaller clinical centers with limited computational resources.
Future studies should incorporate multi-sequence MRI and extended patient cohorts to further validate these promising results.
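The per-class evaluation the study reports (sensitivity and specificity from a four-class confusion matrix) can be sketched generically; the helper names are illustrative and assume every class appears in the ground truth:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=4):
    # Rows: true class, columns: predicted class.
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def per_class_sens_spec(cm):
    # One-vs-rest sensitivity and specificity per class.
    sens, spec = [], []
    total = cm.sum()
    for c in range(cm.shape[0]):
        tp = cm[c, c]
        fn = cm[c].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = total - tp - fn - fp
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return np.array(sens), np.array(spec)
```

Averaging these one-vs-rest values across the four tumor classes gives the kind of summary sensitivity/specificity figures quoted in the abstract.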

TractoTransformer: Diffusion MRI Streamline Tractography using CNN and Transformer Networks

Itzik Waizman, Yakov Gusakov, Itay Benou, Tammy Riklin Raviv

arXiv preprint | Sep 19 2025
White matter tractography is an advanced neuroimaging technique that reconstructs the 3D white matter pathways of the brain from diffusion MRI data. It can be framed as a pathfinding problem aiming to infer neural fiber trajectories from noisy and ambiguous measurements, facing challenges such as crossing, merging, and fanning white-matter configurations. In this paper, we propose a novel tractography method that leverages Transformers to model the sequential nature of white matter streamlines, enabling the prediction of fiber directions by integrating both the trajectory context and current diffusion MRI measurements. To incorporate spatial information, we utilize CNNs that extract microstructural features from local neighborhoods around each voxel. By combining these complementary sources of information, our approach improves the precision and completeness of neural pathway mapping compared to traditional tractography models. We evaluate our method with the Tractometer toolkit, achieving competitive performance against state-of-the-art approaches, and present qualitative results on the TractoInferno dataset, demonstrating strong generalization to real-world data.
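Tractography as pathfinding reduces to iteratively stepping along predicted fiber directions. A minimal sketch, with a generic `direction_fn` standing in for the paper's CNN+Transformer predictor (the forward-consistency flip is a common tracking convention, assumed here):

```python
import numpy as np

def track_streamline(seed, direction_fn, step=0.5, max_steps=200):
    # Iteratively step along predicted directions; the previous direction is
    # passed back in as trajectory context (the role the Transformer plays).
    pts = [np.asarray(seed, dtype=float)]
    prev = None
    for _ in range(max_steps):
        d = direction_fn(pts[-1], prev)
        if d is None:          # predictor signals a stopping criterion
            break
        d = d / np.linalg.norm(d)
        # Keep the streamline moving forward rather than doubling back.
        if prev is not None and np.dot(d, prev) < 0:
            d = -d
        pts.append(pts[-1] + step * d)
        prev = d
    return np.array(pts)
```

In the paper's setting, `direction_fn` would evaluate CNN features of the local diffusion neighborhood plus the Transformer's encoding of the streamline so far; here any callable with that signature works.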

Rapid and robust quantitative cartilage assessment for the clinical setting: deep learning-enhanced accelerated T2 mapping.

Carretero-Gómez L, Wiesinger F, Fung M, Nunes B, Pedoia V, Majumdar S, Desai AD, Gatti A, Chaudhari A, Sánchez-Lacalle E, Malpica N, Padrón M

PubMed | Sep 18 2025
Clinical adoption of T2 mapping is limited by poor reproducibility, lengthy examination times, and cumbersome image analysis. This study aimed to develop an accelerated deep learning (DL)-enhanced cartilage T2 mapping sequence (DL CartiGram), demonstrate its repeatability and reproducibility, and evaluate its accuracy compared to conventional T2 mapping using a semi-automatic pipeline. DL CartiGram was implemented using a modified 2D Multi-Echo Spin-Echo sequence at 3 T, incorporating parallel imaging and DL-based image reconstruction. Phantom tests were performed at two sites to obtain test-retest T2 maps, using single-echo spin-echo (SE) measurements as reference values. At one site, DL CartiGram and conventional T2 mapping were performed on 43 patients. T2 values were extracted from 52 patellar and femoral compartments using DL knee segmentation and the DOSMA framework. Repeatability and reproducibility were assessed using coefficients of variation (CV), Bland-Altman analysis, and concordance correlation coefficients (CCC). T2 differences were evaluated with Wilcoxon signed-rank tests, paired t tests, and accuracy CV. Phantom tests showed intra-site repeatability with CVs ≤ 2.52% and T2 precision ≤ 1 ms. Inter-site reproducibility showed a CV of 2.74% and a CCC of 99% (CI 92-100%). Bland-Altman analysis showed a bias of 1.56 ms between sites (p = 0.03), likely due to temperature effects. In vivo, DL CartiGram reduced scan time by 40%, yielding accurate cartilage T2 measurements (CV = 0.97%) with no significant differences compared to conventional T2 mapping (p = 0.1). DL CartiGram significantly accelerates T2 mapping, while still assuring excellent repeatability and reproducibility. Combined with the semi-automatic post-processing pipeline, it emerges as a promising tool for quantitative T2 cartilage biomarker assessment in clinical settings.
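T2 mapping itself rests on the mono-exponential decay model S(TE) = S0 · exp(−TE/T2) fitted across echo times. A minimal log-linear least-squares fit (the conventional baseline underlying any T2 map, not DL CartiGram's accelerated reconstruction):

```python
import numpy as np

def fit_t2(te_ms, signal):
    # Mono-exponential fit S(TE) = S0 * exp(-TE / T2).
    # Taking logs gives log S = log S0 - TE / T2, a linear system in
    # (log S0, 1/T2) solvable by ordinary least squares.
    te = np.asarray(te_ms, dtype=float)
    A = np.vstack([np.ones_like(te), -te]).T
    coef, *_ = np.linalg.lstsq(A, np.log(signal), rcond=None)
    s0 = np.exp(coef[0])
    t2 = 1.0 / coef[1]
    return s0, t2
```

Applying this voxel-wise over a multi-echo spin-echo series yields the T2 map from which per-compartment cartilage values are then averaged.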

MRI on a Budget: Leveraging Low and Ultra-Low Intensity Technology in Africa.

Ussi KK, Mtenga RB

PubMed | Sep 18 2025
Magnetic resonance imaging (MRI) is a cornerstone of brain and spine diagnostics. Yet access across Africa is limited by high installation costs, power requirements, and the need for specialized shielding and facilities. Low- and ultra-low-field (ULF) MRI systems operating below 0.3 T are emerging as a practical alternative to expand neuroimaging capacity in resource-constrained settings. However, ULF MRI faces challenges that hinder its use in clinical settings. Technological advances that tackle these challenges, such as permanent Halbach-array magnets, portable scanner designs like those successfully deployed in Uganda and Malawi, and deep learning methods including convolutional neural network electromagnetic interference cancellation and residual U-Net image reconstruction, have improved image quality and reduced noise, making ULF MRI increasingly viable. We review the state of low-field MRI technology, its application in point-of-care and rural contexts, and the specific limitations that remain, including reduced signal-to-noise ratio, larger voxel size requirements, and susceptibility to motion artifacts. Although not a replacement for high-field scanners in detecting subtle or small lesions, low-field MRI offers a promising pathway to broaden diagnostic imaging availability, support clinical decision-making, and advance equitable neuroimaging research in under-resourced regions. ABBREVIATIONS: CNN = convolutional neural network; EMI = electromagnetic interference; FID = free induction decay; LMIC = low- and middle-income countries; MRI = magnetic resonance imaging; NCDs = non-communicable diseases; RF = radiofrequency pulse; SNR = signal-to-noise ratio; TBI = traumatic brain injury.
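The CNN-based EMI cancellation mentioned above learns a mapping from external reference channels to the interference picked up by the MR coil. A linear least-squares version of the same idea, as an illustrative sketch (the signals below are synthetic):

```python
import numpy as np

def emi_cancel(primary, references):
    # Fit the acquired signal as a linear combination of EMI reference
    # channels (ordinary least squares), then subtract the fitted
    # interference, leaving the MR signal component.
    R = np.atleast_2d(references)                  # (n_refs, n_samples)
    w, *_ = np.linalg.lstsq(R.T, primary, rcond=None)
    return primary - R.T @ w
```

Unshielded ULF scanners typically record such reference channels with antennas placed away from the imaging volume; the CNN variant generalizes this linear fit to nonlinear, time-varying couplings.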

Mamba-Enhanced Diffusion Model for Perception-Aware Blind Super-Resolution of Magnetic Resonance Imaging.

Zhao X, Yang X, Song Z

PubMed | Sep 18 2025
High-resolution magnetic resonance imaging (HR MRI) can provide accurate and rich information that helps doctors detect subtle lesions, delineate tumor boundaries, evaluate small anatomical structures, and assess early-stage pathological changes that might be obscured in lower-resolution images. However, acquiring HR MRI images often requires prolonged scanning, which causes physical and mental discomfort for the patient. Even slight patient movement may produce motion artifacts and blur the acquired MRI images, affecting the accuracy of clinical diagnosis. To tackle these problems, we propose a novel method, the Mamba-enhanced Diffusion Model (MDM), for perception-aware blind super-resolution of magnetic resonance imaging, which includes two key components: a kernel noise estimator and an SR reconstructor. Specifically, we propose a Perception-aware Blur Kernel Noise estimator (PBKN estimator), which takes advantage of the diffusion model to estimate the blur kernel from low-resolution (LR) images. Meanwhile, we construct a novel progressive feature reconstructor, which takes the estimated blur kernel and the content information of the LR images as prior knowledge to reconstruct more accurate SR MRI images using a diffusion model. Moreover, we design a novel Semantic Information Fusion Mamba (SIF-Mamba) module for the SR reconstruction task. SIF-Mamba is specifically designed within the progressive feature reconstructor to capture the global context of MRI images and improve feature reconstruction. Extensive experiments demonstrate that our proposed MDM achieves better SR reconstruction results than several outstanding methods. Our code is available at https://github.com/YXDBright/MDM.
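Blind SR methods like MDM assume a degradation model in which the LR image is a blurred, downsampled, noisy version of the HR image; estimating the unknown blur kernel is what the PBKN estimator does. A sketch of that forward model (the kernel, scale, and noise level here are illustrative, not the paper's settings):

```python
import numpy as np
from scipy.signal import convolve2d

def degrade(hr, kernel, scale=2, noise_sigma=0.0, rng=None):
    # Classical blind-SR forward model: LR = downsample(HR * k) + n,
    # where k is the (unknown, to-be-estimated) blur kernel.
    blurred = convolve2d(hr, kernel, mode="same", boundary="symm")
    lr = blurred[::scale, ::scale]
    if noise_sigma > 0:
        rng = np.random.default_rng(0) if rng is None else rng
        lr = lr + rng.normal(0.0, noise_sigma, size=lr.shape)
    return lr
```

A blind-SR reconstructor inverts this process without knowing `kernel`; MDM's two-stage design first infers the kernel with a diffusion model, then conditions the SR diffusion on it.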

NeuroRAD-FM: A Foundation Model for Neuro-Oncology with Distributionally Robust Training

Moinak Bhattacharya, Angelica P. Kurtz, Fabio M. Iwamoto, Prateek Prasanna, Gagandeep Singh

arXiv preprint | Sep 18 2025
Neuro-oncology poses unique challenges for machine learning due to heterogeneous data and tumor complexity, limiting the ability of foundation models (FMs) to generalize across cohorts. Existing FMs also perform poorly in predicting uncommon molecular markers, which are essential for treatment response and risk stratification. To address these gaps, we developed a neuro-oncology specific FM with a distributionally robust loss function, enabling accurate estimation of tumor phenotypes while maintaining cross-institution generalization. We pretrained self-supervised backbones (BYOL, DINO, MAE, MoCo) on multi-institutional brain tumor MRI and applied distributionally robust optimization (DRO) to mitigate site and class imbalance. Downstream tasks included molecular classification of common markers (MGMT, IDH1, 1p/19q, EGFR), uncommon alterations (ATRX, TP53, CDKN2A/2B, TERT), continuous markers (Ki-67, TP53), and overall survival prediction in IDH1 wild-type glioblastoma at UCSF, UPenn, and CUIMC. Our method improved molecular prediction and reduced site-specific embedding differences. At CUIMC, mean balanced accuracy rose from 0.744 to 0.785 and AUC from 0.656 to 0.676, with the largest gains for underrepresented endpoints (CDKN2A/2B accuracy 0.86 to 0.92, AUC 0.73 to 0.92; ATRX AUC 0.69 to 0.82; Ki-67 accuracy 0.60 to 0.69). For survival, c-index improved at all sites: CUIMC 0.592 to 0.597, UPenn 0.647 to 0.672, UCSF 0.600 to 0.627. Grad-CAM highlighted tumor and peri-tumoral regions, confirming interpretability. Overall, coupling FMs with DRO yields more site-invariant representations, improves prediction of common and uncommon markers, and enhances survival discrimination, underscoring the need for prospective validation and integration of longitudinal and interventional signals to advance precision neuro-oncology.
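Distributionally robust optimization of the kind described above can be sketched as a group-weight update that up-weights the worst-performing sites or classes; the exponentiated-gradient form below is one common group-DRO choice (an assumption for illustration, not necessarily the authors' exact loss):

```python
import numpy as np

def group_dro_weights(q, group_losses, eta=0.1):
    # Exponentiated-gradient update: groups (e.g., sites or rare markers)
    # with higher loss receive more weight, pushing the model to improve
    # on its worst group rather than the average.
    q = q * np.exp(eta * np.asarray(group_losses, dtype=float))
    return q / q.sum()

def group_dro_loss(q, group_losses):
    # Robust objective: weighted sum of per-group losses.
    return float(q @ np.asarray(group_losses, dtype=float))
```

Alternating this weight update with gradient steps on `group_dro_loss` is the standard group-DRO training loop, which is one way to mitigate the site and class imbalance the abstract mentions.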

Compartment-specific Fat Distribution Profiles have Distinct Relationships with Cardiovascular Ageing and Future Cardiovascular Events

Maldonado-Garcia, C., Salih, A., Neubauer, S., Petersen, S. E., Raisi-Estabragh, Z.

medRxiv preprint | Sep 18 2025
Obesity is a global public health priority and a major risk factor for cardiovascular disease (CVD). Emerging evidence indicates variation in the pathologic consequences of fat deposition across different body compartments. Biological heart age may be estimated from imaging measures of cardiac structure and function and captures risk beyond traditional measures. Using cardiac and abdominal magnetic resonance imaging (MRI) from 34,496 UK Biobank participants and linked health record data, we investigated how compartment-specific obesity phenotypes relate to cardiac ageing and incident CVD risk. Biological heart age was estimated using machine learning from 56 cardiac MRI phenotypes. K-means clustering of abdominal visceral (VAT), abdominal subcutaneous (ASAT), and pericardial (PAT) adiposity identified a high-risk cluster (characterised by greater adiposity across all three depots) associated with accelerated cardiac ageing, and a lower-risk cluster linked to decelerated ageing. These clusters provided more precise stratification of cardiovascular ageing trajectories than established body mass index categories. Mediation analysis showed that VAT and PAT explained 13.7% and 11.9% of obesity-associated CVD risk, respectively, whereas ASAT contributed minimally, with effects more pronounced in males. Thus, cardiovascular risk appears to be driven primarily by visceral and pericardial rather than subcutaneous fat. Our findings reveal distinct risk profiles of compartment-specific fat distributions and underscore the importance of pericardial and visceral fat as drivers of greater cardiovascular ageing. Advanced image-defined adiposity profiling may enhance CVD risk prediction beyond anthropometric measures and improve mechanistic understanding.
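The compartment clustering step can be illustrated with plain Lloyd's k-means on synthetic VAT/ASAT/PAT values; the volumes below are made-up placeholders, not UK Biobank data:

```python
import numpy as np

def kmeans(X, k=2, iters=50, seed=0):
    # Lloyd's algorithm: assign each point to its nearest centre,
    # then recompute centres as cluster means.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = X[labels == c].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(1)
# Columns: VAT, ASAT, PAT (hypothetical litres); two adiposity profiles.
lower_risk = rng.normal([2.0, 6.0, 0.5], 0.3, size=(20, 3))
higher_risk = rng.normal([6.0, 10.0, 2.0], 0.3, size=(20, 3))
X = np.vstack([lower_risk, higher_risk])
labels, centers = kmeans(X, k=2)
```

In the study, cluster membership derived this way from the three depots was then related to heart-age acceleration; in practice one would standardize the depot volumes before clustering.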

MDFNet: a multi-dimensional feature fusion model based on structural magnetic resonance imaging representations for brain age estimation.

Zhang C, Nan P, Song L, Wang Y, Su K, Zheng Q

PubMed | Sep 18 2025
Brain age estimation plays a significant role in understanding the aging process and its relationship with neurodegenerative diseases. The aim of this study was to devise a unified multi-dimensional feature fusion model (MDFNet) to enhance brain age estimation solely from structural MRI, using a diverse representation of the whole brain, tissue segmentation of gray matter volume, node message passing of the brain network, edge-based graph path convolution of brain connectivity, and demographic data. The MDFNet was developed by devising and integrating a whole-brain-level Euclidean-convolution channel (WBEC-channel), a tissue-level Euclidean-convolution channel (TEC-channel), a graph-convolution channel based on node message passing (nodeGCN-channel), an edge-based graph path convolution channel on brain connectivity (edgeGCN-channel), and a multilayer perceptron channel for demographic data (MLP-channel) to enhance multi-dimensional feature fusion. The MDFNet was validated on 1872 healthy subjects from four public datasets and applied to an independent cohort of Alzheimer's disease (AD) patients. Interpretability analysis and normative modeling of the MDFNet in brain age estimation were also performed. The MDFNet achieved a superior performance of mean absolute error (MAE) of 4.396 ± 0.244 years, a Pearson correlation coefficient (PCC) of 0.912 ± 0.002, and a Spearman's rank correlation (SRCC) of 0.819 ± 0.015 compared with state-of-the-art deep learning models. The AD group exhibited a significantly greater brain age gap (BAG) than the healthy group (P < 0.05), and normative modeling likewise showed significantly higher mean Z-scores in AD patients than in healthy subjects (P < 0.05). Interpretability was also visualized at both the group and individual level, enhancing the reliability of the MDFNet. The MDFNet enhanced brain age estimation solely from structural MRI by employing a multi-dimensional feature integration strategy.
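The headline metrics (MAE, PCC) and the brain age gap used to compare AD and healthy groups are straightforward to compute; a minimal sketch with hypothetical ages:

```python
import numpy as np

def brain_age_metrics(age_true, age_pred):
    # MAE and Pearson correlation between chronological and predicted age,
    # plus the brain age gap (BAG = predicted - chronological) used to
    # compare patient groups.
    age_true = np.asarray(age_true, dtype=float)
    age_pred = np.asarray(age_pred, dtype=float)
    mae = np.mean(np.abs(age_pred - age_true))
    pcc = np.corrcoef(age_true, age_pred)[0, 1]
    bag = age_pred - age_true
    return mae, pcc, bag
```

A group-level test (e.g., AD versus healthy) then compares the two BAG distributions, which is where the study's P < 0.05 finding comes from.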
