Training the next generation of physicians for artificial intelligence-assisted clinical neuroradiology: ASNR MICCAI Brain Tumor Segmentation (BraTS) 2025 Lighthouse Challenge education platform

Raisa Amiruddin, Nikolay Y. Yordanov, Nazanin Maleki, Pascal Fehringer, Athanasios Gkampenis, Anastasia Janas, Kiril Krantchev, Ahmed Moawad, Fabian Umeh, Salma Abosabie, Sara Abosabie, Albara Alotaibi, Mohamed Ghonim, Mohanad Ghonim, Sedra Abou Ali Mhana, Nathan Page, Marko Jakovljevic, Yasaman Sharifi, Prisha Bhatia, Amirreza Manteghinejad, Melisa Guelen, Michael Veronesi, Virginia Hill, Tiffany So, Mark Krycia, Bojan Petrovic, Fatima Memon, Justin Cramer, Elizabeth Schrickel, Vilma Kosovic, Lorenna Vidal, Gerard Thompson, Ichiro Ikuta, Basimah Albalooshy, Ali Nabavizadeh, Nourel Hoda Tahon, Karuna Shekdar, Aashim Bhatia, Claudia Kirsch, Gennaro D'Anna, Philipp Lohmann, Amal Saleh Nour, Andriy Myronenko, Adam Goldman-Yassen, Janet R. Reid, Sanjay Aneja, Spyridon Bakas, Mariam Aboian

arXiv preprint · Sep 21 2025
High-quality reference standard image data creation by neuroradiology experts for automated clinical tools can be a powerful vehicle for neuroradiology and artificial intelligence education. We developed a multimodal educational approach for students and trainees during the MICCAI Brain Tumor Segmentation (BraTS) Lighthouse Challenge 2025, a landmark initiative to develop accurate brain tumor segmentation algorithms. Fifty-six medical students and radiology trainees volunteered to annotate brain tumor MR images for the BraTS challenges of 2023 and 2024, guided by faculty-led didactics on neuropathology MRI. Among the 56 annotators, 14 select volunteers were then paired with neuroradiology faculty for guided one-on-one annotation sessions for BraTS 2025. Lectures on neuroanatomy, pathology, and AI, journal clubs, and data scientist-led workshops were organized online. Annotators and audience members completed surveys on their perceived knowledge before and after the annotations and lectures, respectively. Fourteen coordinators, each paired with a neuroradiologist, completed the data annotation process, averaging 1322.9 ± 760.7 hours per dataset per pair and 1200 segmentations in total. On a scale of 1-10, annotation coordinators reported a significant increase in familiarity with image segmentation software from before to after annotation (initial average 6.0 ± 2.9 to final average 8.9 ± 1.1) and a significant increase in familiarity with brain tumor features (initial average 6.2 ± 2.4 to final average 8.1 ± 1.2). We demonstrate an innovative offering for providing neuroradiology and AI education through an image segmentation challenge to enhance understanding of algorithm development, reinforce the concept of a data reference standard, and diversify opportunities for AI-driven image analysis among future physicians.
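
A minimal sketch of the pre/post comparison reported above, assuming a paired test on the coordinators' 1-10 familiarity ratings; the score arrays below are simulated stand-ins matched roughly to the abstract's means and SDs, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 14  # fourteen annotation coordinators

# Hypothetical paired scores on the 1-10 familiarity scale.
pre = np.clip(rng.normal(6.0, 2.9, n), 1, 10)
post = np.clip(pre + rng.normal(2.9, 1.0, n), 1, 10)

t, p = stats.ttest_rel(post, pre)  # paired t-test on pre/post ratings
print(f"pre {pre.mean():.1f}+/-{pre.std(ddof=1):.1f}, "
      f"post {post.mean():.1f}+/-{post.std(ddof=1):.1f}, t={t:.2f}, p={p:.4f}")
```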

Diffusion-based arbitrary-scale magnetic resonance image super-resolution via progressive k-space reconstruction and denoising.

Wang J, Shi Z, Gu X, Yang Y, Sun J

PubMed · Sep 20 2025
Acquiring high-resolution magnetic resonance (MR) images is challenging due to constraints such as hardware limitations and acquisition times. Super-resolution (SR) techniques offer a potential solution for enhancing MR image quality without changing the magnetic resonance imaging (MRI) hardware. However, typical SR methods are designed for fixed upsampling scales and often produce over-smoothed images that lack fine textures and edge details. To address these issues, we propose a unified diffusion-based framework for arbitrary-scale in-plane MR image SR, dubbed the Progressive Reconstruction and Denoising Diffusion Model (PRDDiff). Specifically, the forward diffusion process of PRDDiff gradually masks out high-frequency components and adds Gaussian noise to simulate the downsampling process in MRI. To reverse this process, we propose an Adaptive Resolution Restoration Network (ARRNet), which introduces a current step corresponding to the resolution of the input MR image and an ending step corresponding to the target resolution. This design guides the ARRNet in recovering the clean MR image at the target resolution from the input MR image. The SR process starts from an MR image at the initial resolution and gradually enhances it to higher resolutions by progressively reconstructing high-frequency components and removing noise based on the MR image recovered by ARRNet. Furthermore, we design a multi-stage SR strategy that incrementally enhances resolution through multiple sequential stages to further improve recovery accuracy. Each stage utilizes a set number of sampling steps from PRDDiff, guided by a specific ending step, to recover details pertinent to the predefined intermediate resolution. We conduct extensive experiments on the fastMRI knee dataset, the fastMRI brain dataset, our real-collected LR-HR brain dataset, and a clinical pediatric cerebral palsy (CP) dataset, including T1-weighted and T2-weighted images for the brain and proton density-weighted images for the knee. The results demonstrate that PRDDiff outperforms previous MR image super-resolution methods in terms of reconstruction accuracy, generalization, downstream lesion segmentation accuracy, and CP classification performance. The code is publicly available at https://github.com/Jiazhen-Wang/PRDDiff-main.
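
The forward process described above (masking high-frequency components plus Gaussian noise) can be pictured with a short sketch; the low-pass mask shape, schedule, and noise scale are illustrative assumptions, not PRDDiff's exact parameterization:

```python
import torch

def forward_step(image: torch.Tensor, keep_frac: float, sigma: float) -> torch.Tensor:
    """Low-pass mask an image in k-space, then add Gaussian noise."""
    k = torch.fft.fftshift(torch.fft.fft2(image), dim=(-2, -1))
    h, w = image.shape[-2:]
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    mask = ((yy.abs() <= keep_frac) & (xx.abs() <= keep_frac)).to(k.dtype)
    k_low = k * mask  # discard high frequencies, mimicking downsampling
    degraded = torch.fft.ifft2(torch.fft.ifftshift(k_low, dim=(-2, -1))).real
    return degraded + sigma * torch.randn_like(degraded)

x = torch.randn(1, 1, 128, 128)                     # stand-in MR slice
x_t = forward_step(x, keep_frac=0.25, sigma=0.05)   # one forward-diffusion step
```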

Machine learning and deep learning approaches in MRI for quantifying and staging fatty liver disease: A systematic review.

Elhaie M, Koozari A, Koohi H, Alqurain QT

PubMed · Sep 20 2025
Fatty liver disease, encompassing non-alcoholic fatty liver disease (NAFLD) and alcohol-related liver disease (ALD), affects ∼25% of adults globally. Magnetic resonance imaging (MRI), particularly proton density fat fraction (PDFF), is the non-invasive gold standard for hepatic steatosis quantification, but its clinical use is limited by cost, protocol variability, and analysis time. Machine learning (ML) and deep learning (DL) techniques, including convolutional neural networks (CNNs) and generative adversarial networks (GANs), show promise in enhancing MRI-based quantification and staging. Our objective was to systematically review the diagnostic accuracy, reproducibility, and clinical utility of ML and DL techniques applied to MRI for quantifying and staging hepatic steatosis in fatty liver disease. This systematic review was registered in PROSPERO (CRD420251121056) and adhered to PRISMA guidelines, searching PubMed, Cochrane Library, Scopus, IEEE Xplore, Web of Science, Google Scholar, and grey literature for studies on ML/DL applications in MRI for fatty liver disease. Eligible studies involved human participants with suspected or confirmed NAFLD, NASH, or ALD, using ML/DL (e.g., CNNs, GANs) on MRI data (e.g., PDFF, Dixon MRI). Outcomes included diagnostic accuracy (sensitivity, specificity, area under the curve (AUC)), reproducibility (intraclass correlation coefficient (ICC), Dice), and clinical utility (e.g., treatment planning). Two reviewers screened studies, extracted data, and assessed risk of bias using QUADAS-2. Narrative synthesis and, where feasible, meta-analysis were conducted. From 2550 records, 15 studies (n = 25-1038) were included, using CNNs, GANs, radiomics, and dictionary learning on PDFF, chemical shift-encoded MRI, or Dixon MRI. Diagnostic accuracy was high (AUC 0.85-0.97, r = 0.94-0.99 vs. biopsy/MRS), and reproducibility metrics were robust (ICC 0.94-0.99, Dice 0.87-0.94). Efficiency improved significantly (e.g., processing <0.16 s/slice, scan time <1 min). Clinical utility included virtual biopsies, surgical planning, and treatment monitoring. Limitations included small sample sizes, single-center designs, and vendor variability. ML and DL enhance MRI-based hepatic steatosis assessment, offering high accuracy, reproducibility, and efficiency. CNNs excel in segmentation and PDFF quantification, while GANs and radiomics aid free-breathing MRI and NASH staging. Multi-center studies and standardization are needed for clinical integration.
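
The review reports segmentation reproducibility as Dice overlap (0.87-0.94). A minimal sketch of that metric for binary masks, with toy data in place of real segmentations:

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

mask1 = np.zeros((64, 64), bool); mask1[10:40, 10:40] = True
mask2 = np.zeros((64, 64), bool); mask2[12:42, 12:42] = True
print(f"Dice = {dice(mask1, mask2):.3f}")
```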

Uncovering genetic architecture of the heart via genetic association studies of unsupervised deep learning derived endophenotypes.

You L, Zhao X, Xie Z, Patel KA, Chen C, Kitkungvan D, Mohammed KK, Narula N, Arbustini E, Cassidy CK, Narula J, Zhi D

PubMed · Sep 20 2025
Recent genome-wide association studies (GWAS) have effectively linked genetic variants to quantitative traits derived from time-series cardiac magnetic resonance imaging, revealing insights into cardiac morphology and function. Deep learning approaches, however, generally require extensive supervised training on manually annotated data. In this study, we developed a novel framework using a 3D U-architecture autoencoder (cineMAE) to learn deep image phenotypes from cardiac magnetic resonance (CMR) imaging for genetic discovery, focusing on long-axis two-chamber and four-chamber views. We trained a masked autoencoder to develop Unsupervised Derived Image Phenotypes for the heart (Heart-UDIPs). These representations were found to be informative for various heart-specific phenotypes (e.g., left ventricular hypertrophy) and diseases (e.g., hypertrophic cardiomyopathy). GWAS on Heart-UDIPs identified 323 lead SNPs and 628 SNP-prioritized genes, exceeding previous methods. The genes identified by the method described herein exhibited significant associations with cardiac function and showed substantial enrichment in pathways related to cardiac disorders. These results underscore the utility of our Heart-UDIP approach in enhancing the discovery potential of genetic associations, without the need for clinically defined phenotypes or manual annotations.
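
A minimal sketch of the masked-autoencoder idea behind Heart-UDIP: mask random patches, train to reconstruct them, and keep the encoder's latent as the derived phenotype. The tiny 2D network and all sizes here are illustrative assumptions; the paper uses a 3D U-architecture autoencoder (cineMAE):

```python
import torch
import torch.nn as nn

class TinyMAE(nn.Module):
    def __init__(self, dim=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, dim, 4, 2, 1), nn.GELU(),
                                 nn.Conv2d(dim, dim, 4, 2, 1))
        self.dec = nn.Sequential(nn.ConvTranspose2d(dim, dim, 4, 2, 1), nn.GELU(),
                                 nn.ConvTranspose2d(dim, 1, 4, 2, 1))

    def forward(self, x, mask_ratio=0.75, patch=8):
        b, _, h, w = x.shape
        # Random patch mask: zero out ~75% of patches before encoding.
        keep = (torch.rand(b, 1, h // patch, w // patch,
                           device=x.device) > mask_ratio).float()
        mask = keep.repeat_interleave(patch, -2).repeat_interleave(patch, -1)
        z = self.enc(x * mask)   # latent used as the UDIP-style phenotype
        return self.dec(z), z

model = TinyMAE()
cmr = torch.randn(2, 1, 64, 64)             # stand-in CMR frames
recon, phenotype = model(cmr)
# Whole-image loss for brevity; a proper MAE scores only masked patches.
loss = nn.functional.mse_loss(recon, cmr)
```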

Longitudinal Progression of Traumatic Bone Marrow Lesions Following Anterior Cruciate Ligament Injury: Associations With Knee Pain and Concomitant Injuries.

Stirling CE, Pavlovic N, Manske SL, Walker REA, Boyd SK

PubMed · Sep 20 2025
Traumatic bone marrow lesions (BMLs) occur in ~80% of anterior cruciate ligament (ACL) injuries, typically in the lateral femoral condyle (LFC) and lateral tibial plateau (LTP). Associated with microfractures, vascular proliferation, inflammation, and bone density changes, BMLs may contribute to posttraumatic osteoarthritis. However, their relationship with knee pain is unclear. This study examined the prevalence, characteristics, and progression of BMLs after ACL injury, focusing on associations with pain, meniscal and ligament injuries, and fractures. Participants (N = 100, aged 14-55) with MRI-confirmed ACL tears were scanned within 6 weeks post-injury (mean = 30.0, SD = 9.6 days). BML volumes were quantified using a validated machine learning method, and pain was assessed via the Knee Injury and Osteoarthritis Outcome Score (KOOS). Analyses included t-tests, Mann-Whitney U tests, chi-square tests, and Spearman correlations with false discovery rate correction. BMLs were present in 95% of participants, primarily in the LFC and LTP. Males had 33% greater volumes than females (p < 0.05), even after adjusting for BMI. Volumes were higher in cases with depression fractures (p = 0.022) and negatively associated with baseline KOOS Symptoms scores. At 1 year, 92.68% of lesions (by lesion count) had resolved in nonsurgical participants, with a 96.13% volume reduction (p < 0.001). KOOS outcomes were similar between groups, except for slightly better Pain scores in the nonsurgical group. Baseline Pain and Sport scores predicted follow-up outcomes. BMLs are common post-ACL injury, vary by sex and fracture status, and relate modestly to early symptoms. Most resolve within a year, with limited long-term differences by surgical status.
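
A minimal sketch of the statistical pattern named above (Spearman correlations with false discovery rate correction), using simulated stand-in data rather than the study's measurements:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
bml_volume = rng.gamma(2.0, 500.0, size=100)          # hypothetical mm^3 volumes
koos = {s: rng.uniform(0, 100, size=100) for s in
        ["Pain", "Symptoms", "ADL", "Sport", "QOL"]}  # five KOOS subscales

pvals = [stats.spearmanr(bml_volume, v).pvalue for v in koos.values()]
# Benjamini-Hochberg correction across the family of subscale tests.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
for name, p, r in zip(koos, p_adj, reject):
    print(f"KOOS {name}: adjusted p = {p:.3f}, significant = {r}")
```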

A Novel Metric for Detecting Memorization in Generative Models for Brain MRI Synthesis

Antonio Scardace, Lemuel Puglisi, Francesco Guarnera, Sebastiano Battiato, Daniele Ravì

arXiv preprint · Sep 20 2025
Deep generative models have emerged as a transformative tool in medical imaging, offering substantial potential for synthetic data generation. However, recent empirical studies highlight a critical vulnerability: these models can memorize sensitive training data, posing significant risks of unauthorized patient information disclosure. Detecting memorization in generative models remains particularly challenging, necessitating scalable methods capable of identifying training data leakage across large sets of generated samples. In this work, we propose DeepSSIM, a novel self-supervised metric for quantifying memorization in generative models. DeepSSIM is trained to: i) project images into a learned embedding space and ii) force the cosine similarity between embeddings to match the ground-truth SSIM (Structural Similarity Index) scores computed in the image space. To capture domain-specific anatomical features, training incorporates structure-preserving augmentations, allowing DeepSSIM to estimate similarity reliably without requiring precise spatial alignment. We evaluate DeepSSIM in a case study involving synthetic brain MRI data generated by a Latent Diffusion Model (LDM) trained under memorization-prone conditions, using 2,195 MRI scans from two publicly available datasets (IXI and CoRR). Compared to state-of-the-art memorization metrics, DeepSSIM achieves superior performance, improving F1 scores by an average of +52.03% over the best existing method. Code and data of our approach are publicly available at the following link: https://github.com/brAIn-science/DeepSSIM.
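
The training objective the abstract describes maps naturally to a short sketch: embed two images and regress the cosine similarity of their embeddings onto the image-space SSIM score. The toy encoder below is an assumption for illustration; the authors' implementation is at the linked repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from skimage.metrics import structural_similarity

encoder = nn.Sequential(nn.Conv2d(1, 32, 3, 2, 1), nn.ReLU(),
                        nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())  # image -> 64-d embedding

def deepssim_loss(x1: torch.Tensor, x2: torch.Tensor) -> torch.Tensor:
    z1, z2 = encoder(x1), encoder(x2)
    pred = F.cosine_similarity(z1, z2, dim=1)   # similarity in embedding space
    with torch.no_grad():                       # ground-truth image-space SSIM
        target = torch.tensor(
            [structural_similarity(x1[i, 0].numpy(), x2[i, 0].numpy(),
                                   data_range=1.0) for i in range(x1.shape[0])],
            dtype=torch.float32)
    return F.mse_loss(pred, target)

a, b = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
loss = deepssim_loss(a, b)   # backpropagate this to train the embedding
```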

Synthetizing SWI from 3T to 7T by generative diffusion network for deep medullary veins visualization.

Li S, Deng X, Li Q, Zhen Z, Han L, Chen K, Zhou C, Chen F, Huang P, Zhang R, Chen H, Zhang T, Chen W, Tan T, Liu C

PubMed · Sep 19 2025
Ultrahigh-field susceptibility-weighted imaging (SWI) provides excellent tissue contrast and anatomical detail of the brain. However, ultrahigh-field magnetic resonance (MR) scanners are expensive and subject patients to uncomfortable acoustic noise. Deep learning approaches have therefore been proposed to synthesize high-field MR images from low-field MR images; most existing methods rely on generative adversarial networks (GANs) and achieve acceptable results. However, the well-recognized instability of GAN training limits synthesis performance on SWI images, given their fine microvascular structure. Diffusion models, a promising alternative, instead map Gaussian noise to the target image through slow sampling over a considerable number of steps. To address this limitation, we present a generative diffusion-based deep learning imaging model, named the conditional denoising diffusion probabilistic model (CDDPM), for synthesizing high-field (7 Tesla) SWI images from low-field (3 Tesla) SWI images, and we assess its clinical applicability. Crucially, the experimental results demonstrate that the diffusion-based model synthesizing 7T SWI from 3T SWI images can potentially provide an alternative way to achieve the advantages of ultrahigh-field 7T MR images for deep medullary vein visualization.
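
A minimal sketch of the conditioning pattern a conditional DDPM uses: concatenate the 3T image with the current noisy 7T estimate at every reverse step. The two-layer network, linear beta schedule, and omitted timestep embedding are all simplifying assumptions, not the paper's implementation:

```python
import torch
import torch.nn as nn

class CondDenoiser(nn.Module):
    """Predicts the noise in x_t given the 3T condition image."""
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(2, dim, 3, 1, 1), nn.SiLU(),
                                 nn.Conv2d(dim, 1, 3, 1, 1))

    def forward(self, x_t, cond_3t):
        # Timestep embedding omitted for brevity; real DDPMs condition on t.
        return self.net(torch.cat([x_t, cond_3t], dim=1))

model = CondDenoiser()
betas = torch.linspace(1e-4, 0.02, 1000)       # standard linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

x = torch.randn(1, 1, 64, 64)                  # start reverse process from noise
swi_3t = torch.rand(1, 1, 64, 64)              # the 3T condition image
for t in reversed(range(1000)):
    eps = model(x, swi_3t)
    alpha, ab = 1.0 - betas[t], alphas_bar[t]
    x = (x - betas[t] / (1 - ab).sqrt() * eps) / alpha.sqrt()  # posterior mean
    if t > 0:
        x = x + betas[t].sqrt() * torch.randn_like(x)          # sampling noise
```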

SLaM-DiMM: Shared Latent Modeling for Diffusion Based Missing Modality Synthesis in MRI

Bhavesh Sandbhor, Bheeshm Sharma, Balamurugan Palaniappan

arXiv preprint · Sep 19 2025
Brain MRI scans are often acquired in four modalities: T1-weighted with and without contrast enhancement (T1ce and T1w), T2-weighted imaging (T2w), and FLAIR. Leveraging complementary information from these different modalities enables models to learn richer, more discriminative features for understanding brain anatomy, which can be used in downstream tasks such as anomaly detection. However, in clinical practice, not all MRI modalities are always available, for various reasons. This makes missing modality generation a critical challenge in medical image analysis. In this paper, we propose SLaM-DiMM, a novel missing-modality generation framework that harnesses the power of diffusion models to synthesize any of the four target MRI modalities from the other available modalities. Our approach not only generates high-fidelity images but also ensures structural coherence across the depth of the volume through a dedicated coherence enhancement mechanism. Qualitative and quantitative evaluations on the BraTS-Lighthouse-2025 Challenge dataset demonstrate the effectiveness of the proposed approach in synthesizing anatomically plausible and structurally consistent results. Code is available at https://github.com/BheeshmSharma/SLaM-DiMM-MICCAI-BraTS-Challenge-2025.
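
One plausible way to feed "any available modalities" to such a synthesizer, sketched below, is to stack all four channels, zero out the missing ones, and pass an availability mask; this conditioning scheme is an assumption for illustration (see the linked repository for the authors' actual design):

```python
import torch

MODALITIES = ["T1w", "T1ce", "T2w", "FLAIR"]

def build_condition(volumes: dict[str, torch.Tensor], shape=(1, 64, 64)):
    """Returns a (4, H, W) image stack plus a (4,) availability mask."""
    stack, mask = [], []
    for m in MODALITIES:
        present = m in volumes
        stack.append(volumes[m] if present else torch.zeros(shape))
        mask.append(float(present))
    return torch.cat(stack, dim=0), torch.tensor(mask)

# T1w and T2w are available; the model would synthesize T1ce or FLAIR.
available = {"T1w": torch.rand(1, 64, 64), "T2w": torch.rand(1, 64, 64)}
cond, mask = build_condition(available)
```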

Uncertainty-Gated Deformable Network for Breast Tumor Segmentation in MR Images

Yue Zhang, Jiahua Dong, Chengtao Peng, Qiuli Wang, Dan Song, Guiduo Duan

arXiv preprint · Sep 19 2025
Accurate segmentation of breast tumors in magnetic resonance images (MRI) is essential for breast cancer diagnosis, yet existing methods face challenges in capturing irregular tumor shapes and effectively integrating local and global features. To address these limitations, we propose an uncertainty-gated deformable network that leverages the complementary information of CNNs and Transformers. Specifically, we incorporate deformable feature modeling into both the convolution and attention modules, enabling adaptive receptive fields for irregular tumor contours. We also design an Uncertainty-Gated Enhancing Module (U-GEM) to selectively exchange complementary features between the CNN and Transformer based on pixel-wise uncertainty, enhancing both local and global representations. Additionally, a boundary-sensitive deep supervision loss is introduced to further improve tumor boundary delineation. Comprehensive experiments on two clinical breast MRI datasets demonstrate that our method achieves superior segmentation performance compared with state-of-the-art methods, highlighting its clinical potential for accurate breast tumor delineation.
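
A minimal sketch of uncertainty-gated feature exchange in the spirit of U-GEM: estimate pixel-wise uncertainty from each branch's predictive entropy and let the less certain branch borrow features from the other. The gating rule is an illustrative assumption, not the paper's exact module:

```python
import torch

def entropy_map(logits: torch.Tensor) -> torch.Tensor:
    """Pixel-wise binary predictive entropy, normalized to [0, 1]."""
    p = torch.sigmoid(logits)
    ent = -(p * torch.log(p + 1e-8) + (1 - p) * torch.log(1 - p + 1e-8))
    return ent / torch.log(torch.tensor(2.0))

def gated_exchange(f_cnn, f_trans, logits_cnn, logits_trans):
    u_cnn, u_tr = entropy_map(logits_cnn), entropy_map(logits_trans)
    # Where the CNN is uncertain, mix in Transformer features, and vice versa.
    f_cnn_new = (1 - u_cnn) * f_cnn + u_cnn * f_trans
    f_tr_new = (1 - u_tr) * f_trans + u_tr * f_cnn
    return f_cnn_new, f_tr_new

f_c, f_t = torch.randn(1, 16, 64, 64), torch.randn(1, 16, 64, 64)
l_c, l_t = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
f_c2, f_t2 = gated_exchange(f_c, f_t, l_c, l_t)
```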

Deep learning-based acceleration and denoising of 0.55T MRI for enhanced conspicuity of vestibular Schwannoma post contrast administration.

Hinsen M, Nagel A, Heiss R, May M, Wiesmueller M, Mathy C, Zeilinger M, Hornung J, Mueller S, Uder M, Kopp M

PubMed · Sep 19 2025
Deep learning (DL)-based MRI denoising techniques promise improved image quality and shorter examination times. This advancement is particularly beneficial for 0.55T MRI, where the inherently lower signal-to-noise ratio (SNR) can compromise image quality. Sufficient SNR is crucial for the reliable detection of vestibular schwannoma (VS). The objective of this study is to evaluate VS conspicuity and acquisition time (TA) in contrast-enhanced 0.55T MRI examinations using a DL denoising algorithm. From January 2024 to October 2024, we retrospectively included 30 patients with VS (9 women). We acquired a clinical reference protocol of the cerebellopontine angle containing an axial T1w fat-saturated (fs) sequence (number of signal averages [NSA] 4) and a coronal T1w Spectral Attenuated Inversion Recovery (SPAIR) sequence (NSA 2) after contrast agent (CA) application, without advanced DL-based denoising (w/o DL). We reconstructed the axial T1w fs CA sequence and the coronal T1w SPAIR CA sequence first with the full DL denoising mode without changing the NSA (DL&4NSA), and second with 1 NSA for both sequences (DL&1NSA). Each sequence was rated on a 5-point Likert scale (1: insufficient; 3: moderate, clinically sufficient; 5: perfect) for overall image quality, VS conspicuity, and artifacts. Secondly, we analyzed the reliability of the size measurements. Two radiologists specializing in head and neck imaging performed the readings and measurements. The Wilcoxon signed-rank test was used for non-parametric statistical comparison. The DL&4NSA axial/coronal study sequence achieved the highest overall image quality (IQ; median 4.9). The IQ for DL&1NSA (median 4.0) was higher than for the reference sequence without DL (median 3.5; each p < 0.01). Similarly, VS conspicuity was best for DL&4NSA (median 4.9), decreased for DL&1NSA (median 4.1), and was lower but still sufficient without DL (median 3.7; each p < 0.01). The TA for the axial and coronal post-contrast sequences was 8:59 minutes for both DL&4NSA and w/o DL and decreased to 3:24 minutes with DL&1NSA. This study underlines that advanced DL-based denoising techniques can reduce the examination time by more than half while simultaneously improving image quality.
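
A back-of-the-envelope sketch of the trade-off the study exploits: SNR from signal averaging grows with the square root of NSA while scan time grows linearly, so dropping from NSA 4 to NSA 1 cuts acquisition time sharply at a 2x raw-SNR cost that the DL denoiser must recover:

```python
import math

def relative_snr(nsa: int) -> float:
    # SNR scales with the square root of the number of signal averages.
    return math.sqrt(nsa)

print(f"SNR ratio NSA=4 vs NSA=1: {relative_snr(4) / relative_snr(1):.1f}x")

# Acquisition times reported in the abstract (min:sec) for both sequences.
ta_full, ta_fast = 8 * 60 + 59, 3 * 60 + 24
print(f"Time saved: {(ta_full - ta_fast) / ta_full:.0%}")
```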