Diffusion-based arbitrary-scale magnetic resonance image super-resolution via progressive k-space reconstruction and denoising.

Wang J, Shi Z, Gu X, Yang Y, Sun J

pubmed · Sep 20 2025
Acquiring high-resolution magnetic resonance (MR) images is challenging due to constraints such as hardware limitations and acquisition times. Super-resolution (SR) techniques offer a potential solution to enhance MR image quality without changing the magnetic resonance imaging (MRI) hardware. However, typical SR methods are designed for fixed upsampling scales and often produce over-smoothed images that lack fine textures and edge details. To address these issues, we propose a unified diffusion-based framework for arbitrary-scale in-plane MR image SR, dubbed the Progressive Reconstruction and Denoising Diffusion Model (PRDDiff). Specifically, the forward diffusion process of PRDDiff gradually masks out high-frequency components and adds Gaussian noise to simulate the downsampling process in MRI. To reverse this process, we propose an Adaptive Resolution Restoration Network (ARRNet), which introduces a current step corresponding to the resolution of the input MR image and an ending step corresponding to the target resolution. This design guides the ARRNet to recover a clean MR image at the target resolution from the input MR image. The SR process starts from an MR image at the initial resolution and gradually enhances it to higher resolutions by progressively reconstructing high-frequency components and removing noise based on the MR image recovered by the ARRNet. Furthermore, we design a multi-stage SR strategy that incrementally enhances resolution through multiple sequential stages to further improve recovery accuracy. Each stage uses a set number of sampling steps from PRDDiff, guided by a specific ending step, to recover details pertinent to a predefined intermediate resolution. We conduct extensive experiments on the fastMRI knee dataset, the fastMRI brain dataset, our real-collected LR-HR brain dataset, and a clinical pediatric cerebral palsy (CP) dataset, including T1-weighted and T2-weighted images for the brain and proton-density-weighted images for the knee. The results demonstrate that PRDDiff outperforms previous MR image super-resolution methods in terms of reconstruction accuracy, generalization, downstream lesion segmentation accuracy, and CP classification performance. The code is publicly available at https://github.com/Jiazhen-Wang/PRDDiff-main.
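As a rough illustration of the forward process this abstract describes (masking high-frequency k-space components while adding Gaussian noise), here is a minimal NumPy sketch. The square low-pass window, the retained-fraction schedule, and the noise levels are illustrative assumptions, not details from the paper.

```python
import numpy as np

def forward_degrade(image: np.ndarray, keep_frac: float, sigma: float) -> np.ndarray:
    """One step of a PRDDiff-style forward process (sketch): mask out k-space
    frequencies outside a centered low-pass window, then add Gaussian noise."""
    k = np.fft.fftshift(np.fft.fft2(image))          # image -> centered k-space
    h, w = k.shape
    kh, kw = int(h * keep_frac), int(w * keep_frac)  # size of retained low-pass band
    mask = np.zeros_like(k)
    y0, x0 = (h - kh) // 2, (w - kw) // 2
    mask[y0:y0 + kh, x0:x0 + kw] = 1.0               # keep only central (low) frequencies
    degraded = np.fft.ifft2(np.fft.ifftshift(k * mask)).real
    return degraded + np.random.normal(0.0, sigma, image.shape)

# Progressively coarser images as keep_frac shrinks and sigma grows.
img = np.random.rand(256, 256).astype(np.float32)    # stand-in for an MR slice
for keep_frac, sigma in [(0.75, 0.01), (0.5, 0.02), (0.25, 0.04)]:
    img_t = forward_degrade(img, keep_frac, sigma)
```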

Artificial intelligence in the diagnosis of multiple sclerosis using brain imaging modalities: A systematic review and meta-analysis of algorithms.

Darrudi R, Hosseini A, Emami H, Roshanpoor A, Nahayati MA

pubmed · Sep 19 2025
Multiple sclerosis (MS) diagnosis remains challenging due to its heterogeneous clinical manifestations and the absence of a definitive diagnostic test. Conventional magnetic resonance imaging, while central to diagnosis, faces limitations in specificity and inter-rater variability. Artificial intelligence (AI) offers promising solutions for enhancing medical imaging analysis in MS, yet its efficacy requires systematic validation. This systematic review and meta-analysis followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. We searched Embase, PubMed, Web of Science, Scopus, Google Scholar, and gray literature (inception to January 5, 2025) for case-control studies applying AI to magnetic resonance imaging-based MS diagnosis. A random-effects model pooled sensitivity, specificity, and accuracy. Heterogeneity was assessed via the Q-statistic and I². Meta-regression evaluated the impact of pixel count. The meta-analysis revealed pooled sensitivity, specificity, and accuracy of 93%, 95%, and 94%, respectively, demonstrating the efficacy of AI models in MS diagnosis. Additionally, meta-regression showed no significant correlation between the number of pixels and diagnostic performance parameters. Sensitivity analysis confirmed the robustness of the results, while publication bias assessment indicated no evidence of bias. AI-based algorithms show promise in augmenting traditional diagnostic approaches for MS, offering accurate and timely diagnosis. Further research is warranted to standardize AI methodologies and optimize their integration into clinical practice. This study contributes to the growing evidence supporting AI's role in enhancing diagnostics and patient care in MS.
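For readers unfamiliar with the pooling step, the sketch below shows one standard way to pool per-study sensitivities with a random-effects model (DerSimonian-Laird on the logit scale) and to compute I². The study counts are invented for illustration, and the paper's exact estimator may differ.

```python
import numpy as np

def pool_logit_proportions(events, totals):
    """DerSimonian-Laird random-effects pooling of proportions (e.g., per-study
    sensitivities) on the logit scale, with back-transformation and I^2."""
    events, totals = np.asarray(events, float), np.asarray(totals, float)
    y = np.log(events / (totals - events))            # logit of each proportion
    v = 1.0 / events + 1.0 / (totals - events)        # approx. within-study variance
    w = 1.0 / v                                       # fixed-effect weights
    y_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fe) ** 2)                   # Cochran's Q
    k = len(y)
    tau2 = max(0.0, (q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
    w_re = 1.0 / (v + tau2)                           # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return 1.0 / (1.0 + np.exp(-y_re)), i2            # pooled proportion, I^2 (%)

# Illustrative per-study true-positive counts and diseased-case totals.
sens, i2 = pool_logit_proportions([45, 88, 130, 52], [50, 95, 140, 55])
print(f"pooled sensitivity = {sens:.3f}, I^2 = {i2:.1f}%")
```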

TractoTransformer: Diffusion MRI Streamline Tractography using CNN and Transformer Networks

Itzik Waizman, Yakov Gusakov, Itay Benou, Tammy Riklin Raviv

arxiv preprint · Sep 19 2025
White matter tractography is an advanced neuroimaging technique that reconstructs the 3D white matter pathways of the brain from diffusion MRI data. It can be framed as a pathfinding problem aiming to infer neural fiber trajectories from noisy and ambiguous measurements, facing challenges such as crossing, merging, and fanning white-matter configurations. In this paper, we propose a novel tractography method that leverages Transformers to model the sequential nature of white matter streamlines, enabling the prediction of fiber directions by integrating both the trajectory context and current diffusion MRI measurements. To incorporate spatial information, we utilize CNNs that extract microstructural features from local neighborhoods around each voxel. By combining these complementary sources of information, our approach improves the precision and completeness of neural pathway mapping compared to traditional tractography models. We evaluate our method with the Tractometer toolkit, achieving competitive performance against state-of-the-art approaches, and present qualitative results on the TractoInferno dataset, demonstrating strong generalization to real-world data.
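The abstract frames tractography as sequential direction prediction. The following sketch shows the generic deterministic tracking loop such a model would plug into, with a toy predictor standing in for the CNN+Transformer; the step size and angle threshold are arbitrary choices, not the paper's settings.

```python
import numpy as np

def track_streamline(seed, predict_dir, step_mm=0.5, max_steps=300, angle_thresh_deg=60):
    """Grow one streamline: at each point, query a direction predictor (e.g., a
    model conditioned on the trajectory so far and local diffusion features)
    and advance by a fixed step, terminating on implausibly sharp turns."""
    pts = [np.asarray(seed, float)]
    prev = None
    for _ in range(max_steps):
        d = predict_dir(pts)                     # direction proposed by the model
        d = d / (np.linalg.norm(d) + 1e-8)       # normalize to a unit vector
        if prev is not None and float(np.dot(d, prev)) < np.cos(np.radians(angle_thresh_deg)):
            break                                # sharp bend: stop tracking
        pts.append(pts[-1] + step_mm * d)
        prev = d
    return np.stack(pts)

# Toy predictor standing in for the learned model: always heads along +x.
streamline = track_streamline([0.0, 0.0, 0.0], lambda traj: np.array([1.0, 0.0, 0.0]))
print(streamline.shape)   # (n_points, 3)
```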

Lightweight Transfer Learning Models for Multi-Class Brain Tumor Classification: Glioma, Meningioma, Pituitary Tumors, and No Tumor MRI Screening.

Gorenshtein A, Liba T, Goren A

pubmed · Sep 19 2025
Glioma, pituitary tumors, and meningiomas constitute the major types of primary brain tumors. The challenge in achieving a definitive diagnosis stems from the brain's complex structure, limited accessibility for precise imaging, and the resemblance between different types of tumors. An alternative and promising solution is the application of artificial intelligence (AI), specifically through deep learning models. We developed multiple lightweight deep learning models, namely ResNet-18 (both pretrained on ImageNet and trained from scratch), ResNet-34, ResNet-50, and a custom CNN, to classify glioma, meningioma, pituitary tumor, and no-tumor MRI scans. A dataset of 7023 images was employed, split into 5712 for training and 1311 for validation. Each model was evaluated via accuracy, area under the curve (AUC), sensitivity, specificity, and confusion matrices. We compared our models to state-of-the-art (SOTA) methods such as SAlexNet and TumorGANet, highlighting computational efficiency and classification performance. The pretrained ResNet models achieved 98.5-99.2% accuracy and near-perfect validation metrics, with an overall AUC of 1.0 and average sensitivity and specificity both exceeding 97% across the four classes. In comparison, ResNet-18 trained from scratch and the custom CNN achieved 91.99% and 87.03% accuracy, respectively, with AUCs ranging from 0.94 to 1.00. Error analysis revealed moderate misclassification of meningiomas as gliomas in non-pretrained models. Learning-rate optimization facilitated stable convergence, and loss metrics indicated effective generalization with minimal overfitting. Our findings confirm that a moderately sized, transfer-learned network (ResNet-18) can deliver high diagnostic accuracy and robust performance for four-class brain tumor classification. This approach aligns with the goal of providing efficient, accurate, and easily deployable AI solutions, particularly for smaller clinical centers with limited computational resources. Future studies should incorporate multi-sequence MRI and extended patient cohorts to further validate these promising results.
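A minimal PyTorch sketch of the transfer-learning setup the abstract evaluates: load an ImageNet-pretrained ResNet-18 and replace its classification head with a four-class output. The hyperparameters and dummy input here are illustrative, not the authors' configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Four target classes: glioma, meningioma, pituitary tumor, no tumor.
NUM_CLASSES = 4

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # swap the ImageNet head

# Typical fine-tuning setup; learning rate and batch size are illustrative.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 224, 224)   # MRI slices replicated to 3 channels
logits = model(x)                 # shape: (8, 4)
loss = criterion(logits, torch.randint(0, NUM_CLASSES, (8,)))
loss.backward()
optimizer.step()
```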

SLaM-DiMM: Shared Latent Modeling for Diffusion Based Missing Modality Synthesis in MRI

Bhavesh Sandbhor, Bheeshm Sharma, Balamurugan Palaniappan

arxiv preprint · Sep 19 2025
Brain MRI scans are commonly acquired in four modalities: T1-weighted with and without contrast enhancement (T1ce and T1w), T2-weighted (T2w), and FLAIR. Leveraging complementary information from these different modalities enables models to learn richer, more discriminative features for understanding brain anatomy, which can be used in downstream tasks such as anomaly detection. However, in clinical practice, not all MRI modalities are always available, for a variety of reasons. This makes missing-modality generation a critical challenge in medical image analysis. In this paper, we propose SLaM-DiMM, a novel missing-modality generation framework that harnesses the power of diffusion models to synthesize any of the four target MRI modalities from the other available modalities. Our approach not only generates high-fidelity images but also ensures structural coherence across the depth of the volume through a dedicated coherence-enhancement mechanism. Qualitative and quantitative evaluations on the BraTS-Lighthouse-2025 Challenge dataset demonstrate the effectiveness of the proposed approach in synthesizing anatomically plausible and structurally consistent results. Code is available at https://github.com/BheeshmSharma/SLaM-DiMM-MICCAI-BraTS-Challenge-2025.
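One common way to condition a diffusion denoiser on the available modalities is channel-wise concatenation, sketched below with a toy network. This is a generic conditioning pattern, not necessarily the architecture SLaM-DiMM uses.

```python
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Toy stand-in for a conditional diffusion denoiser: the noisy target
    modality is concatenated with the available modalities along channels."""
    def __init__(self, n_cond: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + n_cond, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, noisy_target, cond):
        # Predict the noise present in the target-modality slice.
        return self.net(torch.cat([noisy_target, cond], dim=1))

# E.g., synthesize T2w given T1w, T1ce, and FLAIR slices of the same volume.
denoiser = CondDenoiser(n_cond=3)
x_t = torch.randn(2, 1, 128, 128)     # noisy target-modality slice at step t
cond = torch.randn(2, 3, 128, 128)    # stacked available modalities
eps_hat = denoiser(x_t, cond)         # predicted noise, shape (2, 1, 128, 128)
```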

Enhancing the reliability of Alzheimer's disease prediction in MRI images.

Islam J, Furqon EN, Farady I, Alex JSR, Shih CT, Kuo CC, Lin CY

pubmed · Sep 19 2025
Alzheimer's disease (AD) diagnostic procedures employing magnetic resonance imaging (MRI) analysis encounter considerable obstacles pertaining to reliability and accuracy, especially when deep learning models are utilized within clinical environments. Present deep learning methodologies for MRI-based AD detection frequently demonstrate spatial dependencies and lack robust validation mechanisms. Existing validation techniques inadequately integrate anatomical knowledge and struggle with feature interpretability across a range of imaging conditions. To address this fundamental gap, we introduce a reverse validation paradigm that systematically repositions anatomical structures to test whether models recognize features based on anatomical characteristics rather than spatial memorization. Our research rectifies these shortcomings through three methodologies: Feature Position Invariance (FPI) for the validation of anatomical features, biomarker location augmentation aimed at enhancing spatial learning, and High-Confidence Cohort (HCC) selection for the reliable identification of training samples. The FPI methodology leverages the reverse validation approach to substantiate model predictions through the reconstruction of anatomical features, bolstered by our extensive data augmentation strategy and a confidence-based sample selection technique. Applying this framework with YOLO and MobileNet architectures yielded significant advancements in both binary and three-class AD classification tasks, achieving state-of-the-art accuracy with improvements of 2-4% relative to baseline models. Additionally, our methodology generates interpretable insights through anatomy-aligned validation, establishing direct links between model decisions and neuropathological features. Our experimental findings reveal consistent performance across various anatomical presentations, signifying that the framework effectively enhances both the reliability and interpretability of AD diagnosis through MRI analysis, thereby equipping medical professionals with a more robust diagnostic support system.
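The reverse-validation idea can be pictured as a simple perturbation test: move an anatomical structure and check whether the model's prediction survives. Below is a toy NumPy version under strong simplifying assumptions (a 2D image, a rectangular ROI, mean-fill of the vacated region); the paper's FPI procedure is more elaborate.

```python
import numpy as np

def feature_position_invariance_check(image, roi, shift, predict) -> bool:
    """Reverse-validation sketch: cut a region of interest, paste it at a
    shifted location, and test whether the model's prediction is unchanged.
    `roi` is (y, x, h, w); `predict` maps an image to a class label."""
    y, x, h, w = roi
    patch = image[y:y + h, x:x + w].copy()
    moved = image.copy()
    moved[y:y + h, x:x + w] = image.mean()       # fill the vacated region
    ny, nx = y + shift[0], x + shift[1]
    moved[ny:ny + h, nx:nx + w] = patch          # re-place the structure elsewhere
    return predict(image) == predict(moved)      # True => position-invariant

# Dummy classifier standing in for a trained model.
img = np.random.rand(64, 64)
ok = feature_position_invariance_check(
    img, roi=(10, 10, 8, 8), shift=(20, 0),
    predict=lambda im: int(im.mean() > 0.5))
print("prediction invariant to repositioning:", ok)
```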

Deep learning-based acceleration and denoising of 0.55T MRI for enhanced conspicuity of vestibular schwannoma post contrast administration.

Hinsen M, Nagel A, Heiss R, May M, Wiesmueller M, Mathy C, Zeilinger M, Hornung J, Mueller S, Uder M, Kopp M

pubmed · Sep 19 2025
Deep learning (DL)-based MRI denoising techniques promise improved image quality and shorter examination times. This advancement is particularly beneficial for 0.55T MRI, where the inherently lower signal-to-noise ratio (SNR) can compromise image quality. Sufficient SNR is crucial for the reliable detection of vestibular schwannoma (VS). The objective of this study is to evaluate the VS conspicuity and acquisition time (TA) of contrast-enhanced 0.55T MRI examinations using a DL-denoising algorithm. From January 2024 to October 2024, we retrospectively included 30 patients with VS (9 women). We acquired a clinical reference protocol of the cerebellopontine angle containing a T1-weighted fat-saturated (T1w fs) axial sequence (number of signal averages [NSA] 4) and a T1-weighted Spectral Attenuated Inversion Recovery (SPAIR) coronal sequence (NSA 2) after contrast agent (CA) application, without advanced DL-based denoising (w/o DL). We reconstructed the axial T1w fs CA sequence and the coronal T1w SPAIR CA sequence first in full DL-denoising mode without changing the NSA (DL&4NSA), and second with 1 NSA for both sequences (DL&1NSA). Each sequence was rated on a 5-point Likert scale (1: insufficient; 3: moderate, clinically sufficient; 5: perfect) for overall image quality (IQ), VS conspicuity, and artifacts. Secondly, we analyzed the reliability of the size measurements. Two radiologists specializing in head and neck imaging performed the readings and measurements. The Wilcoxon signed-rank test was used for non-parametric statistical comparison. The DL&4NSA study sequences achieved the highest overall IQ (median 4.9). The IQ for DL&1NSA was higher than for the reference sequence w/o DL (median 4.0 versus 3.5, each p < 0.01). Similarly, VS conspicuity was best for DL&4NSA (median 4.9), decreased for DL&1NSA (median 4.1), and was lower but still sufficient w/o DL (median 3.7, each p < 0.01). The TA for the axial and coronal post-contrast sequences was 8:59 minutes for both DL&4NSA and w/o DL and decreased to 3:24 minutes with DL&1NSA. This study underlines that advanced DL-based denoising techniques can reduce the examination time by more than half while simultaneously improving image quality.
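The statistical comparison reported above is a paired non-parametric test. A minimal SciPy sketch with invented Likert ratings (not the study's data) looks like this:

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired 5-point Likert ratings of VS conspicuity (illustrative values only).
dl_1nsa = np.array([4, 4, 5, 4, 4, 3, 4, 5, 4, 4])
wo_dl   = np.array([4, 3, 4, 4, 3, 3, 4, 4, 3, 4])

stat, p = wilcoxon(dl_1nsa, wo_dl)   # non-parametric paired comparison
print(f"Wilcoxon W = {stat}, p = {p:.3f}")
```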

Synthesizing SWI from 3T to 7T by a generative diffusion network for deep medullary vein visualization.

Li S, Deng X, Li Q, Zhen Z, Han L, Chen K, Zhou C, Chen F, Huang P, Zhang R, Chen H, Zhang T, Chen W, Tan T, Liu C

pubmed · Sep 19 2025
Ultrahigh-field susceptibility-weighted imaging (SWI) provides excellent tissue contrast and anatomical detail of the brain. However, ultrahigh-field magnetic resonance (MR) scanners are expensive and expose patients to an uncomfortable level of acoustic noise. Therefore, several deep learning approaches have been proposed to synthesize high-field MR images from low-field MR images; most existing methods rely on generative adversarial networks (GANs) and achieve acceptable results. However, the well-recognized instability of GAN training limits synthesis performance on SWI images, given their fine microvascular structure. Diffusion models are a promising alternative: they map Gaussian noise to the target image through a slow sampling process over a considerable number of steps. To address this limitation, we present a generative diffusion-based deep learning imaging model, named the conditional denoising diffusion probabilistic model (CDDPM), for synthesizing high-field (7 Tesla) SWI images from low-field (3 Tesla) SWI images, and we assess its clinical applicability. Crucially, the experimental results demonstrate that the diffusion-based model that synthesizes 7T SWI from 3T SWI images potentially provides an alternative way to obtain the advantages of ultrahigh-field 7T MR images for deep medullary vein visualization.
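For context, a single reverse (ancestral sampling) step of a standard DDPM is sketched below; in a conditional variant such as the CDDPM described here, the noise estimate would come from a network that also receives the 3T SWI image. The schedule, shapes, and random noise estimate are illustrative.

```python
import torch

def ddpm_reverse_step(x_t, t, eps_hat, betas):
    """One ancestral sampling step of a DDPM: estimate x_{t-1} from x_t and
    the predicted noise eps_hat, using the sigma_t^2 = beta_t variance choice."""
    beta_t = betas[t]
    alpha_t = 1.0 - beta_t
    alpha_bar_t = torch.prod(1.0 - betas[: t + 1])
    mean = (x_t - beta_t / torch.sqrt(1 - alpha_bar_t) * eps_hat) / torch.sqrt(alpha_t)
    if t == 0:
        return mean                                  # final step: no added noise
    return mean + torch.sqrt(beta_t) * torch.randn_like(x_t)

betas = torch.linspace(1e-4, 0.02, 1000)     # standard linear schedule
x_t = torch.randn(1, 1, 64, 64)              # noisy 7T estimate at step t
x_prev = ddpm_reverse_step(x_t, 999, torch.randn_like(x_t), betas)
```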

Multi-modal CT Perfusion-based Deep Learning for Predicting Stroke Lesion Outcomes in Complete and No Recanalization Scenarios.

Yang H, George Y, Mehta D, Lin L, Chen C, Yang D, Sun J, Lau KF, Bain C, Yang Q, Parsons MW, Ge Z

pubmed · Sep 19 2025
Predicting the final location and volume of lesions in acute ischemic stroke (AIS) is crucial for clinical management. While CT perfusion (CTP) imaging is routinely used for estimating lesion outcomes, conventional threshold-based methods have limitations. We developed specialized outcome-prediction deep learning models that predict the infarct core in successful-reperfusion cases and the combined core-penumbra region in unsuccessful-reperfusion cases. We developed single-modal and multi-modal deep learning models using CTP parameter maps to predict the final infarct lesion on follow-up diffusion-weighted imaging (DWI). Using a multi-center dataset, deep learning models were developed and evaluated separately for patients with complete recanalization (CR, successful reperfusion, n=350) and no recanalization (NR, unsuccessful reperfusion, n=138) after treatment. The CR model was designed to predict the infarct core region, while the NR model predicted the expanded hypoperfused tissue encompassing both core and penumbra regions. Five-fold cross-validation was performed for robust evaluation. The multi-modal 3D nnU-Net model demonstrated superior performance, achieving mean Dice scores of 35.36% in CR patients and 50.22% in NR patients. This significantly outperformed the method currently used in clinical practice, providing more accurate outcome estimates than conventional single-modality threshold-based measures, which yielded Dice scores of 15.73% and 39.71% for the CR and NR groups, respectively. Our approach offers outcome estimates for both successful and unsuccessful reperfusion, enabling clinicians to better evaluate treatment eligibility for reperfusion therapies and assess potential treatment benefits. This advancement facilitates more personalized treatment recommendations and has the potential to significantly enhance clinical decision-making in AIS management by providing more accurate tissue-outcome predictions than conventional single-modality threshold-based approaches. AIS=acute ischemic stroke; CR=complete recanalization; NR=no recanalization; DT=delay time; IQR=interquartile range; GT=ground truth; HD95=95% Hausdorff distance; ASSD=average symmetric surface distance; MLV=mismatch lesion volume.
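The Dice score used as the headline metric above is straightforward to compute from binary masks; here is a short sketch with synthetic masks.

```python
import numpy as np

def dice_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0   # both empty => perfect overlap

# Two overlapping square lesions as stand-ins for predicted and ground-truth masks.
pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt   = np.zeros((64, 64), bool); gt[25:45, 25:45] = True
print(f"Dice = {dice_score(pred, gt):.4f}")
```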

Assessing the Feasibility of Deep Learning-Based Attenuation Correction Using Photon Emission Data in 18F-FDG Images for Dedicated Head and Neck PET Scanners.

Shahrbabaki Mofrad M, Ghafari A, Amiri Tehrani Zade A, Aghahosseini F, Ay M, Farzenefar S, Sheikhzadeh P

pubmed · Sep 18 2025
This study aimed to evaluate the use of deep learning techniques to produce measured attenuation-corrected (MAC) images from non-attenuation-corrected (NAC) 18F-FDG PET images, focusing on head and neck imaging. Materials and Methods: A Residual Network (ResNet) was trained on 2D head and neck PET images from 114 patients (12,068 slices) without pathology or artifacts. For validation during training and for testing, 21 and 24 patient images without pathology or artifacts were used, respectively, and 12 images with pathologies were used for independent testing. Prediction accuracy was assessed using metrics such as RMSE, SSIM, PSNR, and MSE. The impact of unseen pathologies on the network was evaluated by measuring contrast and SNR in tumoral/hot regions of both reference and predicted images. Statistical significance between the contrast and SNR of reference and predicted images was assessed using a paired-sample t-test. Results: Two nuclear medicine physicians evaluated the predicted head and neck MAC images, finding them visually similar to the reference images. In the normal test group, PSNR, SSIM, RMSE, and MSE were 44.02 ± 1.77, 0.99 ± 0.002, 0.007 ± 0.0019, and 0.000053 ± 0.000030, respectively. For the pathological test group, the values were 43.14 ± 2.10, 0.99 ± 0.005, 0.0078 ± 0.0015, and 0.000063 ± 0.000026, respectively. No significant differences were found in SNR and contrast between reference and test images without pathology (p > 0.05), but significant differences were found in pathological images (p < 0.05). Conclusion: The deep learning network demonstrated the ability to directly generate head and neck MAC images that closely resembled the reference images. With additional training data, the model has the potential to be utilized in dedicated head and neck PET scanners without the requirement of computed tomography (CT) for attenuation correction.
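The image-similarity metrics reported above (MSE, RMSE, PSNR, SSIM) can be computed with scikit-image, as sketched below on synthetic arrays standing in for predicted and reference MAC images.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(pred: np.ndarray, ref: np.ndarray) -> dict:
    """Similarity metrics between a predicted image and its reference."""
    rng = float(ref.max() - ref.min())
    mse = float(np.mean((pred - ref) ** 2))
    return {
        "MSE": mse,
        "RMSE": float(np.sqrt(mse)),
        "PSNR": peak_signal_noise_ratio(ref, pred, data_range=rng),
        "SSIM": structural_similarity(ref, pred, data_range=rng),
    }

# Synthetic stand-ins: a reference slice and a slightly perturbed prediction.
ref = np.random.rand(128, 128).astype(np.float32)
pred = ref + 0.01 * np.random.randn(128, 128).astype(np.float32)
print(evaluate_pair(pred, ref))
```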