
Deep Learning-Enhanced Single Breath-Hold Abdominal MRI at 0.55 T - Technical Feasibility and Image Quality Assessment.

Seifert AC, Breit HC, Obmann MM, Korolenko A, Nickel MD, Fenchel M, Boll DT, Vosshenrich J

PubMed · Aug 21 2025
Inherently lower signal-to-noise ratios hamper the broad clinical use of low-field abdominal MRI. This study aimed to investigate the technical feasibility and image quality of deep learning (DL)-enhanced T2 HASTE and T1 VIBE-Dixon abdominal MRI at 0.55 T. From July 2024 to September 2024, healthy volunteers underwent conventional and DL-enhanced 0.55 T abdominal MRI, including conventional T2 HASTE, fat-suppressed T2 HASTE (HASTE FS), and T1 VIBE-Dixon acquisitions, and DL-enhanced single-breath-hold HASTE (HASTE DL-SBH) and multi-breath-hold HASTE (HASTE DL-MBH), fat-suppressed single-breath-hold HASTE (HASTE FS DL-SBH) and multi-breath-hold HASTE (HASTE FS DL-MBH), and T1 VIBE-Dixon (VIBE-Dixon-DL) acquisitions. Three abdominal radiologists rated quality parameters and artifacts (Likert scale 1-5) and recorded incidental findings. Interreader agreement and comparative analyses were conducted. Thirty-three healthy volunteers (mean age: 30 ± 4 years) were evaluated. Image quality was better for single-breath-hold DL-enhanced MRI (all P < 0.001) with good or better interreader agreement (κ ≥ 0.61), including T2 HASTE (HASTE DL-SBH: 4 [IQR: 4-4] vs. HASTE: 3 [3-3]), T2 HASTE FS (4 [4-4] vs. 3 [3-3]), and T1 VIBE-Dixon (4 [4-5] vs. 4 [3-4]). Similarly, image noise and spatial resolution were better for DL-MRI scans (P < 0.001). No quality differences were found between single- and multi-breath-hold HASTE DL or HASTE FS DL (both: 4 [4-4]; P > 0.572). The number and size of incidental lesions were identical between techniques (16 lesions; mean diameter 8 ± 5 mm; P = 1.000). DL-based image reconstruction enables single-breath-hold T2 HASTE and T1 VIBE-Dixon abdominal imaging at 0.55 T with better image quality than conventional MRI.
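A minimal sketch (not from the study) of how such paired Likert comparisons and interreader agreement are commonly computed; the rating arrays below are synthetic placeholders, and the Wilcoxon signed-rank test and linearly weighted Cohen's kappa are shown only as stand-ins for the paper's statistics.

```python
# Illustrative sketch: paired Likert comparison and interreader agreement
# on hypothetical ratings (33 volunteers, scores 1-5).
import numpy as np
from scipy.stats import wilcoxon
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
conventional = rng.integers(2, 5, size=33)        # one Likert score per volunteer
dl_enhanced = np.clip(conventional + 1, 1, 5)     # hypothetical one-point improvement

# Paired, non-parametric comparison of the two protocols
stat, p = wilcoxon(conventional, dl_enhanced)
print(f"Wilcoxon signed-rank: p = {p:.4f}")

# Agreement between two readers scoring the same DL-enhanced scans
reader1 = dl_enhanced
reader2 = np.clip(dl_enhanced + rng.integers(-1, 2, size=33), 1, 5)
kappa = cohen_kappa_score(reader1, reader2, weights="linear")
print(f"Linearly weighted Cohen's kappa: {kappa:.2f}")
```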

Temporal footprint reduction via neural network denoising in 177Lu radioligand therapy.

Nzatsi MC, Varmenot N, Sarrut D, Delpon G, Cherel M, Rousseau C, Ferrer L

PubMed · Aug 20 2025
Internal vectorised therapies, particularly with [177Lu]-labelled agents, are increasingly used for metastatic prostate cancer and neuroendocrine tumours. However, routine dosimetry for organs-at-risk and tumours remains limited due to the complexity and time requirements of current protocols. We developed a Generative Adversarial Network (GAN) to transform rapid 6 s SPECT projections into synthetic 30 s-equivalent projections. SPECT data from twenty patients and phantom acquisitions were collected at multiple time-points. The GAN accurately predicted 30 s projections, enabling estimation of time-integrated activities in kidneys and liver with maximum errors below 6 % and 1 %, respectively, compared to standard acquisitions. For tumours and phantom spheres, results were more variable. On phantom data, GAN-inferred reconstructions showed lower biases for spheres of 20, 8, and 1 mL (8.2 %, 6.9 %, and 21.7 %) compared to direct 6 s acquisitions (12.4 %, 20.4 %, and 24.0 %). However, in patient lesions, 37 segmented tumours showed higher median discrepancies in cumulated activity for the GAN (15.4 %) than for the 6 s approach (4.1 %). Our preliminary results indicate that the GAN can provide reliable dosimetry for organs-at-risk, but further optimisation is needed for small lesion quantification. This approach could reduce SPECT acquisition time from 45 to 9 min for standard three-bed studies, potentially facilitating wider adoption of dosimetry in nuclear medicine and addressing challenges related to toxicity and cumulative absorbed doses in personalised radiopharmaceutical therapy.
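For readers unfamiliar with the setup, a heavily simplified sketch of a paired image-to-image GAN of the kind described, mapping fast 6 s projections to synthetic 30 s-equivalent projections; the architecture, losses, and hyperparameters below are placeholders, not the authors' model.

```python
# Toy paired image-to-image GAN: 6 s SPECT projection -> synthetic 30 s projection.
import torch
import torch.nn as nn

generator = nn.Sequential(                      # stand-in generator; a real model would be larger
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
discriminator = nn.Sequential(                  # patch-style critic on (input, output) pairs
    nn.Conv2d(2, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 4, stride=2, padding=1),
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(proj_6s, proj_30s, l1_weight=100.0):
    """One update on a batch of paired (6 s, 30 s) projections, shape (B, 1, H, W)."""
    fake_30s = generator(proj_6s)

    # Discriminator: real pairs vs. generated pairs
    d_real = discriminator(torch.cat([proj_6s, proj_30s], dim=1))
    d_fake = discriminator(torch.cat([proj_6s, fake_30s.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the 30 s target
    d_fake = discriminator(torch.cat([proj_6s, fake_30s], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + \
             l1_weight * nn.functional.l1_loss(fake_30s, proj_30s)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()
```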

Review of GPU-based Monte Carlo simulation platforms for transmission and emission tomography in medicine.

Chi Y, Schubert KE, Badal A, Roncali E

PubMed · Aug 20 2025
Monte Carlo (MC) simulation remains the gold standard for modeling complex physical interactions in transmission and emission tomography, with GPU parallel computing offering unmatched computational performance and enabling practical, large-scale MC applications. In recent years, rapid advancements in both GPU technologies and tomography techniques have been observed. Harnessing emerging GPU capabilities to accelerate MC simulation and strengthen its role in supporting the rapid growth of medical tomography has become an important topic. To provide useful insights, we conducted a comprehensive review of state-of-the-art GPU-accelerated MC simulations in tomography, highlighting current achievements and underdeveloped areas.

Approach: We reviewed key technical developments across major tomography modalities, including computed tomography (CT), cone-beam CT (CBCT), positron emission tomography, single-photon emission computed tomography, proton CT, emerging techniques, and hybrid modalities. We examined MC simulation methods and major CPU-based MC platforms that have historically supported medical imaging development, followed by a review of GPU acceleration strategies, hardware evolution, and leading GPU-based MC simulation packages. Future development directions were also discussed.

Main Results: Significant advancements have been achieved in both tomography and MC simulation technologies over the past half-century. The introduction of GPUs has enabled speedups often exceeding 100-1000 times over CPU implementations, providing essential support to the development of new imaging systems. Emerging GPU features such as ray-tracing cores, tensor cores, and GPU-execution-friendly transport methods offer further opportunities for performance enhancement.

Significance: GPU-based MC simulation is expected to remain essential in advancing medical emission and transmission tomography. With the emergence of new concepts such as training machine learning models with synthetic data, Digital Twins for Healthcare, and Virtual Clinical Trials, improving hardware portability and modularizing GPU-based MC codes to adapt to these evolving simulation needs represent important future research directions. This review aims to provide useful insights for researchers, developers, and practitioners in relevant fields.
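As a concrete illustration of the transport kernel such codes parallelise (typically one photon or photon batch per GPU thread), here is a toy CPU-side sketch of exponential free-path sampling through a homogeneous slab; the coefficients are illustrative and not taken from the review.

```python
# Core Monte Carlo photon-transport step: sample free path lengths and
# classify the first interaction in a homogeneous slab.
import numpy as np

rng = np.random.default_rng(42)
n_photons = 1_000_000
mu_total = 0.096          # total attenuation coefficient [1/mm] (illustrative, roughly water at 511 keV)
p_photoelectric = 0.02    # illustrative probability of photoelectric absorption per interaction
slab_thickness = 100.0    # mm

# Sample free path lengths from an exponential distribution: s = -ln(u) / mu
path = -np.log(1.0 - rng.random(n_photons)) / mu_total

# Photons whose first interaction lies beyond the slab escape uncollided
escaped = path > slab_thickness
interacting = ~escaped

# Classify the first interaction (photoelectric absorption vs. Compton scatter)
u = rng.random(interacting.sum())
absorbed = u < p_photoelectric

print(f"transmitted uncollided:         {escaped.mean():.3%}")
print(f"absorbed at first interaction:  {absorbed.sum() / n_photons:.3%}")
print(f"analytic transmission e^(-mu*t): {np.exp(-mu_total * slab_thickness):.3%}")
```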

Potential and challenges of generative adversarial networks for super-resolution in 4D Flow MRI

Oliver Welin Odeback, Arivazhagan Geetha Balasubramanian, Jonas Schollenberger, Edward Ferdiand, Alistair A. Young, C. Alberto Figueroa, Susanne Schnell, Outi Tammisola, Ricardo Vinuesa, Tobias Granberg, Alexander Fyrdahl, David Marlevi

arXiv preprint · Aug 20 2025
4D Flow Magnetic Resonance Imaging (4D Flow MRI) enables non-invasive quantification of blood flow and hemodynamic parameters. However, its clinical application is limited by low spatial resolution and noise, particularly affecting near-wall velocity measurements. Machine learning-based super-resolution has shown promise in addressing these limitations, but challenges remain, not least in recovering near-wall velocities. Generative adversarial networks (GANs) offer a compelling solution, having demonstrated strong capabilities in restoring sharp boundaries in non-medical super-resolution tasks. Yet, their application in 4D Flow MRI remains unexplored, with implementation challenged by known issues such as training instability and non-convergence. In this study, we investigate GAN-based super-resolution in 4D Flow MRI. Training and validation were conducted using patient-specific cerebrovascular in-silico models, converted into synthetic images via an MR-true reconstruction pipeline. A dedicated GAN architecture was implemented and evaluated across three adversarial loss functions: Vanilla, Relativistic, and Wasserstein. Our results demonstrate that the proposed GAN improved near-wall velocity recovery compared to a non-adversarial reference (vNRMSE: 6.9% vs. 9.6%); however, that implementation specifics are critical for stable network training. While Vanilla and Relativistic GANs proved unstable compared to generator-only training (vNRMSE: 8.1% and 7.8% vs. 7.2%), a Wasserstein GAN demonstrated optimal stability and incremental improvement (vNRMSE: 6.9% vs. 7.2%). The Wasserstein GAN further outperformed the generator-only baseline at low SNR (vNRMSE: 8.7% vs. 10.7%). These findings highlight the potential of GAN-based super-resolution in enhancing 4D Flow MRI, particularly in challenging cerebrovascular regions, while emphasizing the need for careful selection of adversarial strategies.
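For orientation, the three adversarial formulations compared in the study can be written compactly as loss functions on critic outputs; the sketch below shows standard textbook forms (vanilla non-saturating, relativistic average, Wasserstein) and omits all 4D Flow specifics and network architecture.

```python
# Standard forms of the three adversarial losses, given critic logits for
# real and generated (super-resolved) samples.
import torch
import torch.nn.functional as F

def vanilla_losses(d_real, d_fake):
    """Vanilla (non-saturating) GAN losses on raw critic logits."""
    loss_d = F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) + \
             F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    loss_g = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    return loss_d, loss_g

def relativistic_losses(d_real, d_fake):
    """Relativistic average GAN: score each sample relative to the opposite batch mean."""
    rel_real = d_real - d_fake.mean()
    rel_fake = d_fake - d_real.mean()
    loss_d = F.binary_cross_entropy_with_logits(rel_real, torch.ones_like(rel_real)) + \
             F.binary_cross_entropy_with_logits(rel_fake, torch.zeros_like(rel_fake))
    loss_g = F.binary_cross_entropy_with_logits(rel_fake, torch.ones_like(rel_fake)) + \
             F.binary_cross_entropy_with_logits(rel_real, torch.zeros_like(rel_real))
    return loss_d, loss_g

def wasserstein_losses(d_real, d_fake):
    """WGAN critic/generator losses (a Lipschitz constraint, e.g. gradient penalty, is still required)."""
    loss_d = d_fake.mean() - d_real.mean()
    loss_g = -d_fake.mean()
    return loss_d, loss_g
```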

Systematic Evaluation of Wavelet-Based Denoising for MRI Brain Images: Optimal Configurations and Performance Benchmarks

Asadullah Bin Rahman, Masud Ibn Afjal, Md. Abdulla Al Mamun

arXiv preprint · Aug 20 2025
Medical imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound are essential for accurate diagnosis and treatment planning in modern healthcare. However, noise contamination during image acquisition and processing frequently degrades image quality, obscuring critical diagnostic details and compromising clinical decision-making. Additionally, enhancement techniques such as histogram equalization may inadvertently amplify existing noise artifacts, including salt-and-pepper distortions. This study investigates wavelet transform-based denoising methods for effective noise mitigation in medical images, with the primary objective of identifying optimal combinations of threshold values, decomposition levels, and wavelet types to achieve superior denoising performance and enhanced diagnostic accuracy. Through systematic evaluation across various noise conditions, the research demonstrates that the bior6.8 biorthogonal wavelet with universal thresholding at decomposition levels 2-3 consistently achieves optimal denoising performance, providing significant noise reduction while preserving essential anatomical structures and diagnostic features critical for clinical applications.
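The reported best configuration maps directly onto a few lines of PyWavelets; the sketch below applies the bior6.8 wavelet with soft universal (VisuShrink) thresholding at a chosen decomposition level. The MAD-based noise estimate and the threshold formula are the standard choices and are assumed here rather than confirmed by the paper.

```python
# Wavelet denoising with bior6.8 and the universal threshold, using PyWavelets.
import numpy as np
import pywt

def wavelet_denoise(image, wavelet="bior6.8", level=3):
    """Denoise a 2D float image via soft thresholding of detail coefficients."""
    coeffs = pywt.wavedec2(image, wavelet=wavelet, level=level)

    # Estimate noise sigma from the finest diagonal detail band (median absolute deviation)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
    # Universal (VisuShrink) threshold: sigma * sqrt(2 * ln(N))
    threshold = sigma * np.sqrt(2 * np.log(image.size))

    # Soft-threshold all detail bands, keep the approximation band unchanged
    denoised_coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(band, threshold, mode="soft") for band in detail)
        for detail in coeffs[1:]
    ]
    return pywt.waverec2(denoised_coeffs, wavelet=wavelet)
```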

From Slices to Structures: Unsupervised 3D Reconstruction of Female Pelvic Anatomy from Freehand Transvaginal Ultrasound

Max Krähenmann, Sergio Tascon-Morales, Fabian Laumer, Julia E. Vogt, Ece Ozkan

arXiv preprint · Aug 20 2025
Volumetric ultrasound has the potential to significantly improve diagnostic accuracy and clinical decision-making, yet its widespread adoption remains limited by dependence on specialized hardware and restrictive acquisition protocols. In this work, we present a novel unsupervised framework for reconstructing 3D anatomical structures from freehand 2D transvaginal ultrasound (TVS) sweeps, without requiring external tracking or learned pose estimators. Our method adapts the principles of Gaussian Splatting to the domain of ultrasound, introducing a slice-aware, differentiable rasterizer tailored to the unique physics and geometry of ultrasound imaging. We model anatomy as a collection of anisotropic 3D Gaussians and optimize their parameters directly from image-level supervision, leveraging sensorless probe motion estimation and domain-specific geometric priors. The result is a compact, flexible, and memory-efficient volumetric representation that captures anatomical detail with high spatial fidelity. This work demonstrates that accurate 3D reconstruction from 2D ultrasound images can be achieved through purely computational means, offering a scalable alternative to conventional 3D systems and enabling new opportunities for AI-assisted analysis and diagnosis.
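As a conceptual aid (not the paper's rasterizer), the basic operation behind a slice-aware Gaussian representation is evaluating anisotropic 3D Gaussians on an arbitrary 2D imaging plane; the sketch below does this densely in NumPy with made-up Gaussian parameters.

```python
# Evaluate a set of anisotropic 3D Gaussians on a 2D slice plane.
import numpy as np

def render_slice(means, covs, amplitudes, origin, u_axis, v_axis, size=(128, 128), spacing=0.5):
    """Accumulate Gaussian densities on the plane spanned by u_axis and v_axis."""
    h, w = size
    us = (np.arange(w) - w / 2) * spacing
    vs = (np.arange(h) - h / 2) * spacing
    uu, vv = np.meshgrid(us, vs)
    # 3D coordinates of every pixel on the slice plane
    points = origin + uu[..., None] * u_axis + vv[..., None] * v_axis   # (h, w, 3)

    image = np.zeros(size)
    for mu, cov, amp in zip(means, covs, amplitudes):
        diff = points - mu
        inv_cov = np.linalg.inv(cov)
        # Squared Mahalanobis distance of every pixel to the Gaussian centre
        md2 = np.einsum("...i,ij,...j->...", diff, inv_cov, diff)
        image += amp * np.exp(-0.5 * md2)
    return image

# One elongated (anisotropic) Gaussian, sliced through its centre
means = [np.array([0.0, 0.0, 0.0])]
covs = [np.diag([4.0, 1.0, 0.25])]
amplitudes = [1.0]
img = render_slice(means, covs, amplitudes,
                   origin=np.zeros(3),
                   u_axis=np.array([1.0, 0.0, 0.0]),
                   v_axis=np.array([0.0, 1.0, 0.0]))
print(img.shape, float(img.max()))
```

In a splatting framework these Gaussian parameters would be optimized by gradient descent against the acquired 2D frames; the loop above only shows the forward slicing step.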

CUTE-MRI: Conformalized Uncertainty-based framework for Time-adaptivE MRI

Paul Fischer, Jan Nikolas Morshuis, Thomas Küstner, Christian Baumgartner

arXiv preprint · Aug 20 2025
Magnetic Resonance Imaging (MRI) offers unparalleled soft-tissue contrast but is fundamentally limited by long acquisition times. While deep learning-based accelerated MRI can dramatically shorten scan times, the reconstruction from undersampled data introduces ambiguity resulting from an ill-posed problem with infinitely many possible solutions that propagates to downstream clinical tasks. This uncertainty is usually ignored during the acquisition process as acceleration factors are often fixed a priori, resulting in scans that are either unnecessarily long or of insufficient quality for a given clinical endpoint. This work introduces a dynamic, uncertainty-aware acquisition framework that adjusts scan time on a per-subject basis. Our method leverages a probabilistic reconstruction model to estimate image uncertainty, which is then propagated through a full analysis pipeline to a quantitative metric of interest (e.g., patellar cartilage volume or cardiac ejection fraction). We use conformal prediction to transform this uncertainty into a rigorous, calibrated confidence interval for the metric. During acquisition, the system iteratively samples k-space, updates the reconstruction, and evaluates the confidence interval. The scan terminates automatically once the uncertainty meets a user-predefined precision target. We validate our framework on both knee and cardiac MRI datasets. Our results demonstrate that this adaptive approach reduces scan times compared to fixed protocols while providing formal statistical guarantees on the precision of the final image. This framework moves beyond fixed acceleration factors, enabling patient-specific acquisitions that balance scan efficiency with diagnostic confidence, a critical step towards personalized and resource-efficient MRI.
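The conformal step can be illustrated with a generic split-conformal sketch: calibrate a quantile of absolute errors on held-out data, then wrap each new metric estimate in an interval of that half-width and keep acquiring until the interval is narrow enough. The calibration data, predictor, and precision target below are placeholders, not values from the paper.

```python
# Split conformal prediction interval for a scalar downstream metric.
import numpy as np

def conformal_interval(cal_predictions, cal_truths, new_prediction, alpha=0.1):
    """Return a (1 - alpha) interval for a new scalar metric estimate."""
    # Nonconformity scores on a held-out calibration set
    scores = np.abs(np.asarray(cal_predictions) - np.asarray(cal_truths))
    n = len(scores)
    # Finite-sample-corrected quantile level
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, q_level, method="higher")
    return new_prediction - q, new_prediction + q

# Usage inside an adaptive acquisition loop (all numbers are made up):
precision_target = 0.5   # e.g. acceptable uncertainty in mL of cartilage volume
rng = np.random.default_rng(1)
cal_pred, cal_true = rng.normal(10, 2, 200), rng.normal(10, 2, 200)
lo, hi = conformal_interval(cal_pred, cal_true, new_prediction=9.8)
if hi - lo <= precision_target:
    print("precision target met: stop acquisition")
else:
    print("acquire more k-space and re-evaluate")
```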

Development and validation of 3D super-resolution convolutional neural network for 18F-FDG-PET images.

Endo H, Hirata K, Magota K, Yoshimura T, Katoh C, Kudo K

PubMed · Aug 19 2025
Positron emission tomography (PET) is a valuable tool for cancer diagnosis but generally has a lower spatial resolution compared to computed tomography (CT) or magnetic resonance imaging (MRI). High-resolution PET scanners that use silicon photomultipliers and time-of-flight measurements are expensive. Therefore, cost-effective software-based super-resolution methods are required. This study proposes a novel approach for enhancing whole-body PET image resolution by applying a 2.5-dimensional Super-Resolution Convolutional Neural Network (2.5D-SRCNN) combined with logarithmic transformation preprocessing. This method aims to improve image quality and maintain quantitative accuracy, particularly for standardized uptake value measurements, while providing a memory-efficient alternative to full three-dimensional processing and managing the wide dynamic range of tracer uptake in PET images. We analyzed data from 90 patients who underwent whole-body FDG-PET/CT examinations and reconstructed low-resolution slices with a voxel size of 4 × 4 × 4 mm and corresponding high-resolution (HR) slices with a voxel size of 2 × 2 × 2 mm. The proposed 2.5D-SRCNN model, based on the conventional 2D-SRCNN structure, incorporates information from adjacent slices to generate a high-resolution output. Logarithmic transformation of the voxel values was applied to manage the large dynamic range caused by physiological tracer accumulation in the bladder. Performance was assessed using the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). The quantitative accuracy of standardized uptake values (SUV) was validated using a phantom study. The results demonstrated that the 2.5D-SRCNN with logarithmic transformation significantly outperformed the conventional 2D-SRCNN in terms of PSNR and SSIM (p < 0.0001). The proposed method also showed an improved depiction of small spheres in the phantom while maintaining the accuracy of the SUV. Our proposed method for whole-body PET images using a super-resolution model with the 2.5D approach and logarithmic transformation may be effective in generating super-resolution images with a lower spatial error and better quantitative accuracy. The online version contains supplementary material available at 10.1186/s40658-025-00791-y.
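A rough sketch of the described pipeline under stated assumptions: an SRCNN-style network whose input channels are adjacent slices (the 2.5D idea), preceded by a logarithmic transform to compress the SUV dynamic range. Layer sizes and the exact transform are guesses for illustration, not the authors' implementation.

```python
# 2.5D SRCNN sketch: adjacent low-resolution slices in, central HR slice out.
import torch
import torch.nn as nn

class SRCNN25D(nn.Module):
    """SRCNN-style network operating on N adjacent slices stacked as channels."""
    def __init__(self, n_slices=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_slices, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),   # predicts the central high-resolution slice
        )

    def forward(self, x):
        return self.net(x)

def log_transform(voxels, eps=1e-6):
    """Compress the dynamic range (e.g. bladder hot spots) before the network."""
    return torch.log1p(voxels.clamp(min=0.0) + eps)

def inverse_log_transform(voxels, eps=1e-6):
    return torch.expm1(voxels) - eps

# As in the original SRCNN, the low-resolution stack is assumed to be
# interpolated to the high-resolution grid before being fed to the network.
model = SRCNN25D(n_slices=3)
stack = torch.rand(1, 3, 128, 128) * 20.0        # three adjacent slices with toy SUV-like values
pred_hr = inverse_log_transform(model(log_transform(stack)))
print(pred_hr.shape)                              # torch.Size([1, 1, 128, 128])
```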

Latent Interpolation Learning Using Diffusion Models for Cardiac Volume Reconstruction

Niklas Bubeck, Suprosanna Shit, Chen Chen, Can Zhao, Pengfei Guo, Dong Yang, Georg Zitzlsberger, Daguang Xu, Bernhard Kainz, Daniel Rueckert, Jiazhen Pan

arXiv preprint · Aug 19 2025
Cardiac Magnetic Resonance (CMR) imaging is a critical tool for diagnosing and managing cardiovascular disease, yet its utility is often limited by the sparse acquisition of 2D short-axis slices, resulting in incomplete volumetric information. Accurate 3D reconstruction from these sparse slices is essential for comprehensive cardiac assessment, but existing methods face challenges, including reliance on predefined interpolation schemes (e.g., linear or spherical), computational inefficiency, and dependence on additional semantic inputs such as segmentation labels or motion data. To address these limitations, we propose a novel Cardiac Latent Interpolation Diffusion (CaLID) framework that introduces three key innovations. First, we present a data-driven interpolation scheme based on diffusion models, which can capture complex, non-linear relationships between sparse slices and improves reconstruction accuracy. Second, we design a computationally efficient method that operates in the latent space and speeds up 3D whole-heart upsampling time by a factor of 24, reducing computational overhead compared to previous methods. Third, with only sparse 2D CMR images as input, our method achieves SOTA performance against baseline methods, eliminating the need for auxiliary input such as morphological guidance, thus simplifying workflows. We further extend our method to 2D+T data, enabling the effective modeling of spatiotemporal dynamics and ensuring temporal coherence. Extensive volumetric evaluations and downstream segmentation tasks demonstrate that CaLID achieves superior reconstruction quality and efficiency. By addressing the fundamental limitations of existing approaches, our framework advances the state of the art for spatial and spatiotemporal whole-heart reconstruction, offering a robust and clinically practical solution for cardiovascular imaging.
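For context on the baselines CaLID replaces, the two predefined interpolation schemes mentioned (linear and spherical) can be written as simple operations on latent vectors; the sketch below is generic and not tied to the paper's latent space.

```python
# Linear (lerp) vs. spherical (slerp) interpolation between two latent codes.
import numpy as np

def lerp(z0, z1, t):
    """Straight-line interpolation between latents z0 and z1 at fraction t."""
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t, eps=1e-8):
    """Spherical interpolation along the great circle between z0 and z1."""
    cos_omega = np.dot(z0 / (np.linalg.norm(z0) + eps), z1 / (np.linalg.norm(z1) + eps))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:
        return lerp(z0, z1, t)
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

# Latents of two acquired short-axis slices (placeholders); generate intermediates.
z_slice_a, z_slice_b = np.random.randn(256), np.random.randn(256)
intermediates = [slerp(z_slice_a, z_slice_b, t) for t in np.linspace(0.0, 1.0, 5)]
print(len(intermediates), intermediates[0].shape)
```

A diffusion-based interpolator learns this mapping from data instead of fixing it analytically, which is the gap the paper targets.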

Improving Deep Learning for Accelerated MRI With Data Filtering

Kang Lin, Anselm Krainovic, Kun Wang, Reinhard Heckel

arXiv preprint · Aug 19 2025
Deep neural networks achieve state-of-the-art results for accelerated MRI reconstruction. Most research on deep learning based imaging focuses on improving neural network architectures trained and evaluated on fixed and homogeneous training and evaluation data. In this work, we investigate data curation strategies for improving MRI reconstruction. We assemble a large dataset of raw k-space data from 18 public sources consisting of 1.1M images and construct a diverse evaluation set comprising 48 test sets, capturing variations in anatomy, contrast, number of coils, and other key factors. We propose and study different data filtering strategies to enhance performance of current state-of-the-art neural networks for accelerated MRI reconstruction. Our experiments show that filtering the training data leads to consistent, albeit modest, performance gains. These performance gains are robust across different training set sizes and accelerations, and we find that filtering is particularly beneficial when the proportion of in-distribution data in the unfiltered training set is low.
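Purely as a hypothetical illustration of what a metadata-based training-data filter might look like (the paper's actual filtering criteria are not specified here), one could select training samples whose anatomy, contrast, or coil count matches the target evaluation distribution:

```python
# Hypothetical metadata-based filter over raw k-space training samples.
from dataclasses import dataclass

@dataclass
class KSpaceSample:
    anatomy: str      # e.g. "knee", "brain"
    contrast: str     # e.g. "T1", "T2", "FLAIR"
    n_coils: int

def keep(sample: KSpaceSample,
         target_anatomies=frozenset({"knee", "brain"}),
         min_coils=8) -> bool:
    """Keep only samples resembling the target distribution of the evaluation sets."""
    return sample.anatomy in target_anatomies and sample.n_coils >= min_coils

dataset = [
    KSpaceSample("knee", "T2", 15),
    KSpaceSample("abdomen", "T1", 4),
    KSpaceSample("brain", "FLAIR", 20),
]
filtered = [s for s in dataset if keep(s)]
print(len(filtered), "of", len(dataset), "samples retained")
```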