Page 12 of 54537 results

Pushing the limits of cardiac MRI: deep-learning based real-time cine imaging in free breathing vs breath hold.

Klemenz AC, Watzke LM, Deyerberg KK, Böttcher B, Gorodezky M, Manzke M, Dalmer A, Lorbeer R, Weber MA, Meinel FG

pubmed · Aug 23 2025
To evaluate deep-learning (DL) based real-time cardiac cine sequences acquired in free breathing (FB) vs breath hold (BH). In this prospective single-centre cohort study, 56 healthy adult volunteers were investigated on a 1.5-T MRI scanner. A set of real-time cine sequences, including a short-axis stack and 2-, 3-, and 4-chamber views, was acquired in FB and with BH. A validated DL-based cine sequence acquired over three cardiac cycles served as the reference standard for volumetric results. Subjective image quality (sIQ) was rated by two blinded readers. Volumetric analysis of both ventricles was performed. sIQ was rated as good to excellent for FB real-time cine images, slightly inferior to BH real-time cine images (p < 0.0001). Overall acquisition time for one set of cine sequences was 50% shorter with FB (median 90 vs 180 s, p < 0.0001). There were significant differences between the real-time sequences and the reference in left ventricular (LV) end-diastolic volume, LV end-systolic volume, LV stroke volume, and LV mass. Nevertheless, BH cine imaging showed excellent correlation with the reference standard, with an intra-class correlation coefficient (ICC) > 0.90 for all parameters except right ventricular ejection fraction (RV EF, ICC = 0.887). With FB cine imaging, correlation with the reference standard was good for LV ejection fraction (LV EF, ICC = 0.825) and RV EF (ICC = 0.824) and excellent (ICC > 0.90) for all other parameters. DL-based real-time cine imaging is feasible even in FB, with good to excellent image quality and acceptable volumetric results in healthy volunteers.
Question: Conventional cardiac MR (CMR) cine imaging is challenged by arrhythmias and by patients unable to hold their breath, since data are acquired over several heartbeats.
Findings: DL-based real-time cine imaging is feasible in FB, with acceptable volumetric results and a 50% reduction in acquisition time compared with real-time breath-hold sequences.
Clinical relevance: This study supports the wider goal of increasing the availability of CMR by reducing the complexity and duration of the examination, improving patient comfort, and making CMR accessible even to patients who are unable to hold their breath.

Spectral computed tomography thermometry for thermal ablation: applicability and needle artifact reduction.

Koetzier LR, Hendriks P, Heemskerk JWT, van der Werf NR, Selles M, van der Molen AJ, Smits MLJ, Goorden MC, Burgmans MC

pubmed · Aug 23 2025
Effective thermal ablation of liver tumors requires precise monitoring of the ablation zone. Computed tomography (CT) thermometry can non-invasively monitor lethal temperatures but suffers from metal artifacts caused by ablation equipment. This study assesses the applicability of spectral CT thermometry during microwave ablation, comparing the reproducibility, precision, and accuracy of attenuation-based versus physical density-based thermometry. Furthermore, it identifies optimal metal artifact reduction (MAR) methods: O-MAR, deep learning-MAR, spectral CT, and combinations thereof. Four gel phantoms embedded with temperature sensors underwent a 10-minute, 60 W microwave ablation, imaged by a dual-layer spectral CT scanner in 23 scans over time. For each scan, attenuation-based and physical density-based temperature maps were reconstructed. The attenuation-based and physical density-based thermometry models were tested for reproducibility over three repetitions; a fourth repetition focused on accuracy. MAR techniques were applied to one repetition to evaluate temperature precision in artifact-corrupted slices. The correlation between CT value and temperature was highly linear, with an R-squared value exceeding 96%. Model parameters for attenuation-based and physical density-based thermometry were -0.38 HU/°C and 0.00039 °C⁻¹, with coefficients of variation of 2.3% and 6.7%, respectively. Physical density maps improved temperature precision in the presence of needle artifacts by 73% compared to attenuation images. O-MAR improved temperature precision by 49% compared to no MAR. Attenuation-based thermometry yielded narrower Bland-Altman limits of agreement (-7.7 °C to 5.3 °C) than physical density-based thermometry. Spectral physical density-based CT thermometry at 150 keV, used alongside O-MAR, enhances temperature precision in the presence of metal artifacts and achieves reproducible temperature measurements with high accuracy.
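
As a concrete illustration of attenuation-based thermometry, the reported slope of -0.38 HU/°C maps a change in CT number to a temperature change relative to a baseline scan. The sketch below assumes a known baseline temperature and phantom attenuation (both made-up values for illustration, not from the study):

```python
import numpy as np

# Attenuation-based CT thermometry sketch using the slope reported above
# (-0.38 HU/degC). The baseline temperature and phantom attenuation are
# illustrative assumptions.
SLOPE_HU_PER_DEGC = -0.38

def temperature_map(hu_current, hu_baseline, t_baseline=21.0):
    """Estimate per-voxel temperature from the change in CT attenuation."""
    delta_hu = np.asarray(hu_current, float) - np.asarray(hu_baseline, float)
    return t_baseline + delta_hu / SLOPE_HU_PER_DEGC

# A 20 degC rise lowers attenuation by 0.38 * 20 = 7.6 HU.
baseline = np.full((2, 2), 40.0)   # gel phantom at ~40 HU (assumed)
heated = baseline - 7.6
print(temperature_map(heated, baseline))  # ~41 degC everywhere
```

Physical density-based thermometry follows the same pattern with the 0.00039 °C⁻¹ coefficient applied to relative density changes instead of HU differences.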

Motion-robust T2* quantification from low-resolution gradient echo brain MRI with physics-informed deep learning.

Eichhorn H, Spieker V, Hammernik K, Saks E, Felsner L, Weiss K, Preibisch C, Schnabel JA

pubmed · Aug 22 2025
T2* quantification from gradient echo magnetic resonance imaging is particularly affected by subject motion due to its high sensitivity to magnetic field inhomogeneities, which are influenced by motion and might cause signal loss. Thus, motion correction is crucial to obtain high-quality T2* maps. We extend PHIMO, our previously introduced learning-based physics-informed motion correction method for low-resolution T2* mapping. Our extended version, PHIMO+, utilizes acquisition knowledge to enhance the reconstruction performance for challenging motion patterns and increase PHIMO's robustness to varying strengths of magnetic field inhomogeneities across the brain. We perform comprehensive evaluations regarding motion detection accuracy and image quality for data with simulated and real motion. PHIMO+ outperforms the learning-based baseline methods both qualitatively and quantitatively with respect to line detection and image quality.
Moreover, PHIMO+ performs on par with a conventional state-of-the-art motion correction method for T2* quantification from gradient echo MRI, which relies on redundant data acquisition. PHIMO+'s competitive motion correction performance, combined with a reduction in acquisition time by over 40% compared to the state-of-the-art method, makes it a promising solution for motion-robust T2* quantification in research settings and clinical routine.
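
For context, T2* mapping itself rests on a simple signal model: a multi-echo gradient echo signal decays as S(TE) = S0 · exp(-TE / T2*), so a log-linear fit over the echo times recovers T2*. The sketch below uses illustrative echo times and a noise-free signal, not the paper's acquisition:

```python
import numpy as np

# Monoexponential T2* fit from multi-echo gradient echo magnitudes:
# log S(TE) = log S0 - TE / T2*, so the slope of a linear fit gives -1/T2*.
def fit_t2star(te_s, signal):
    slope, intercept = np.polyfit(te_s, np.log(signal), 1)
    return -1.0 / slope                               # T2* in seconds

te = np.array([5e-3, 10e-3, 15e-3, 20e-3, 30e-3])     # echo times (assumed)
t2s_true = 0.04                                       # 40 ms ground truth
sig = 100.0 * np.exp(-te / t2s_true)                  # noise-free decay
print(1e3 * fit_t2star(te, sig))                      # ~40 (ms)
```

Motion corrupts individual k-space lines and thus the echo images this fit consumes, which is why line-level motion detection, as evaluated above, matters for the final map.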

Towards Diagnostic Quality Flat-Panel Detector CT Imaging Using Diffusion Models

Hélène Corbaz, Anh Nguyen, Victor Schulze-Zachau, Paul Friedrich, Alicia Durrer, Florentin Bieder, Philippe C. Cattin, Marios N Psychogios

arxiv preprint · Aug 22 2025
Patients undergoing a mechanical thrombectomy procedure usually have a multi-detector CT (MDCT) scan before and after the intervention. The image quality of the flat-panel detector CT (FDCT) present in the intervention room is generally much lower than that of an MDCT due to significant artifacts. However, using only FDCT images could improve patient management, as the patient would not need to be moved to the MDCT room. Several studies have evaluated the potential use of FDCT imaging alone and the time that could be saved by acquiring the images before and/or after the intervention only with the FDCT. This study proposes using a denoising diffusion probabilistic model (DDPM) to improve the image quality of FDCT scans, making them comparable to MDCT scans. Clinicians evaluated FDCT, MDCT, and our model's predictions for diagnostic purposes using a questionnaire. The DDPM eliminated most artifacts and improved anatomical visibility without reducing bleeding detection, provided that the input FDCT image quality is not too low. Our code can be found on GitHub.
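
For orientation, the core operation of a DDPM is the reverse (denoising) step, iterated from pure noise toward a clean image. This is a minimal sketch of the standard formulation with a placeholder zero noise predictor, not the trained network or conditioning used in the study:

```python
import numpy as np

# One standard DDPM reverse step: subtract the predicted noise, rescale,
# and (except at t=0) re-inject a small amount of fresh noise.
T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def ddpm_reverse_step(x_t, t, eps_model, rng):
    eps = eps_model(x_t, t)               # predicted noise (network in practice)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    z = rng.standard_normal(x_t.shape)
    return mean + np.sqrt(betas[t]) * z   # sigma_t^2 = beta_t variant

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))           # start from pure noise
for t in reversed(range(T)):
    # Placeholder predictor; a real DDPM uses a trained U-Net here.
    x = ddpm_reverse_step(x, t, lambda x_t, t: np.zeros_like(x_t), rng)
```

In an image-translation setting like FDCT-to-MDCT, the noise predictor would additionally be conditioned on the FDCT input.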

Dedicated prostate DOI-TOF-PET based on the ProVision detection concept.

Vo HP, Williams T, Doroud K, Williams C, Rafecas M

pubmed · Aug 22 2025
The ProVision scanner is a dedicated prostate PET system with limited angular coverage; it employs a new detector technology that provides high spatial resolution as well as information about depth-of-interaction (DOI) and time-of-flight (TOF). The goal of this work is to develop a flexible image reconstruction framework and study the imaging performance of the current ProVision scanner.
Approach: Experimental datasets, including point-like sources, an image quality phantom, and a pelvic phantom, were acquired with the ProVision scanner to investigate the impact of the oblique lines of response introduced via a multi-offset scanning protocol. This approach aims to mitigate data truncation artifacts and further characterise the current imaging performance of the system. For image reconstruction, we applied the list-mode Maximum Likelihood Expectation Maximisation (MLEM) algorithm incorporating TOF information. The system matrix and sensitivity models account for both detector attenuation and position uncertainty.
Main Results: The scanner provides good spatial resolution in the coronal plane; however, elongations caused by the limited angular coverage distort the reconstructed images. The availability of TOF and DOI information, as well as the addition of a multi-offset scanning protocol, could not fully compensate for these distortions.
Significance: The ProVision scanner concept, with its innovative detector technology, shows promising outcomes for fast and inexpensive PET without CT. Despite current limitations due to the limited angular coverage, which leads to image distortions, ongoing advancements such as improved timing resolution, regularisation techniques, and artificial intelligence are expected to significantly reduce these artifacts and enhance image quality.
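
The MLEM update underlying such reconstructions can be sketched in a binned (sinogram) form; the list-mode variant iterates over individual events, and TOF/DOI weighting is omitted here. The tiny system matrix is a made-up illustration, not the ProVision geometry:

```python
import numpy as np

# Binned MLEM: multiply the current image by the backprojected ratio of
# measured to forward-projected counts, normalised by per-voxel sensitivity.
def mlem(A, y, n_iter=50):
    sens = A.sum(axis=0)                  # per-voxel sensitivity
    lam = np.ones(A.shape[1])             # uniform non-negative initial image
    for _ in range(n_iter):
        proj = A @ lam                    # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return lam

# Toy 3-voxel "scanner": rows are lines of response (illustrative only).
A = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.5, 1.0],
              [0.5, 1.0, 0.5]])
truth = np.array([2.0, 1.0, 3.0])
y = A @ truth                             # noise-free measurements
print(mlem(A, y, n_iter=500))             # converges toward `truth`
```

Limited angular coverage corresponds to missing rows of A, which is why elongation artifacts persist even with a correct update rule.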

Extrapolation Convolution for Data Prediction on a 2-D Grid: Bridging Spatial and Frequency Domains With Applications in Image Outpainting and Compressed Sensing.

Ibrahim V, Alaya Cheikh F, Asari VK, Paul JS

pubmed · Aug 22 2025
Extrapolation plays a critical role in machine/deep learning (ML/DL), enabling models to predict data points beyond their training constraints, which is particularly useful in scenarios that deviate significantly from training conditions. This article addresses the limitations of current convolutional neural networks (CNNs) in extrapolation tasks within image restoration and compressed sensing (CS). While CNNs show potential in tasks such as image outpainting and CS, traditional convolutions are limited by their reliance on interpolation, failing to fully capture the dependencies needed for predicting values outside the known data. This work proposes an extrapolation convolution (EC) framework that models missing-data prediction as an extrapolation problem using linear prediction within DL architectures. The approach is applied in two domains: first, image outpainting, where EC in encoder-decoder (EnDec) networks replaces conventional interpolation methods to reduce artifacts and enhance fine-detail representation; second, Fourier-based CS-magnetic resonance imaging (CS-MRI), where it predicts high-frequency signal values from undersampled measurements in the frequency domain, improving reconstruction quality and preserving subtle structural details at high acceleration factors. Comparative experiments demonstrate that the proposed EC-DecNet and FDRN outperform traditional CNN-based models, achieving high-quality image reconstruction with finer details, as shown by improved peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and kernel inception distance (KID)/Fréchet inception distance (FID) scores. Ablation studies and analysis highlight the effectiveness of larger kernel sizes and multilevel semi-supervised learning in FDRN for enhancing extrapolation accuracy in the frequency domain.
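
The linear-prediction principle behind EC can be illustrated in 1D, independently of any network architecture: fit autoregressive coefficients on the known samples, then iterate the predictor past the observed boundary. This toy sketch is not the paper's EC layer, only the underlying idea:

```python
import numpy as np

# 1D extrapolation by linear prediction: x[n] ~ sum_k a[k] * x[n-1-k],
# with coefficients a fit by least squares on the known signal.
def lp_extrapolate(x, order, n_pred):
    x = np.asarray(x, dtype=float)
    rows = [x[n - order:n][::-1] for n in range(order, len(x))]
    a, *_ = np.linalg.lstsq(np.array(rows), x[order:], rcond=None)
    out = list(x)
    for _ in range(n_pred):
        out.append(float(np.dot(a, out[-1:-order - 1:-1])))
    return np.array(out[len(x):])

# A pure sinusoid satisfies a 2-tap recursion exactly, so a 2nd-order
# predictor extrapolates it with essentially zero error.
n = np.arange(64)
sig = np.cos(0.2 * np.pi * n)
pred = lp_extrapolate(sig, order=2, n_pred=8)
print(np.max(np.abs(pred - np.cos(0.2 * np.pi * np.arange(64, 72)))))  # ~0
```

Interpolation-style convolutions, by contrast, can only blend values inside the known support, which is the gap the EC framework targets.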

Robust Deep Learning for Pulse-echo Speed of Sound Imaging via Time-shift Maps.

Chen H, Han A

pubmed · Aug 22 2025
Accurately imaging the spatial distribution of longitudinal speed of sound (SoS) has a profound impact on image quality and the diagnostic value of ultrasound. Knowledge of the SoS distribution allows effective aberration correction to improve image quality. SoS imaging also provides a new contrast mechanism to facilitate disease diagnosis. However, SoS imaging is challenging in the pulse-echo mode. Deep learning (DL) is a promising approach for pulse-echo SoS imaging, which may yield more accurate results than purely physics-based approaches. Herein, we developed a robust DL approach for SoS imaging that learns the nonlinear mapping between measured time shifts and the underlying SoS without being subject to the constraints of a specific forward model. Various strategies were adopted to enhance model performance. Time-shift maps were computed by adopting a common mid-angle configuration from the non-DL literature, normalizing complex beamformed ultrasound data, and accounting for depth-dependent frequency when converting phase shifts to time shifts. The structural similarity index measure (SSIM) was incorporated into the loss function to learn the global structure for SoS imaging. A two-stage training strategy was employed, leveraging computationally efficient ray-tracing synthesis for extensive pretraining and more realistic but computationally expensive full-wave simulations for fine-tuning. Using these combined strategies, our model was shown to be robust and generalizable across different conditions. The simulation-trained model successfully reconstructed the SoS maps of phantoms using experimental data. Compared with the physics-based inversion approach, our method improved reconstruction accuracy and contrast-to-noise ratio in phantom experiments. These results demonstrated the accuracy and robustness of our approach.
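
The phase-to-time-shift conversion described above can be sketched directly: the phase of the conjugate product of two complex beamformed signals is divided by 2π times a depth-dependent center frequency. The linear frequency-downshift model and all numbers below are illustrative assumptions, not values from the study:

```python
import numpy as np

# Time-shift estimation from complex (IQ) beamformed data: the phase of
# a * conj(b) gives the phase shift, which a depth-dependent frequency
# f(z) converts to a time shift in seconds.
def time_shift_map(iq_a, iq_b, f0_hz, downshift_hz_per_m, depths_m):
    phase = np.angle(iq_a * np.conj(iq_b))          # radians, per sample
    f_z = f0_hz - downshift_hz_per_m * depths_m     # assumed linear downshift
    return phase / (2.0 * np.pi * f_z)              # seconds

depths = np.linspace(0.0, 0.04, 5)                  # 0-4 cm
f0 = 5e6                                            # 5 MHz transmit (assumed)
true_dt = 20e-9                                     # 20 ns ground-truth shift
f_z = f0 - 20e6 * depths
iq_a = np.exp(1j * 2 * np.pi * f_z * true_dt)
iq_b = np.ones_like(iq_a)
print(time_shift_map(iq_a, iq_b, f0, 20e6, depths)) # ~2e-08 at every depth
```

Ignoring the downshift (using f0 at all depths) would bias the recovered time shift at depth, which is why the depth-dependent correction matters.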

Vision-Guided Surgical Navigation Using Computer Vision for Dynamic Intraoperative Imaging Updates.

Ruthberg J, Gunderson N, Chen P, Harris G, Case H, Bly R, Seibel EJ, Abuzeid WM

pubmed · Aug 22 2025
Residual disease after endoscopic sinus surgery (ESS) contributes to poor outcomes and revision surgery. Image-guided surgery systems cannot dynamically reflect intraoperative changes. We propose a sensorless, video-based method for intraoperative CT updating using neural radiance fields (NeRF), a deep learning algorithm used to create 3D surgical field reconstructions. Bilateral ESS was performed on three 3D-printed models (n = 6 sides). Postoperative endoscopic videos were processed through a custom NeRF pipeline to generate 3D reconstructions, which were co-registered to preoperative CT scans. Digitally updated CT models were created through algorithmic subtraction of resected regions, then volumetrically segmented and compared to ground-truth postoperative CT. Accuracy was assessed using the Hausdorff distance (surface alignment), the Dice similarity coefficient (DSC) (volumetric overlap), and Bland‒Altman analysis (BAA) (statistical agreement). Comparison of the updated CT with the ground-truth postoperative CT yielded an average Hausdorff distance of 0.27 ± 0.076 mm and a 95th-percentile Hausdorff distance of 0.82 ± 0.165 mm, indicating sub-millimeter surface alignment. The DSC was 0.93 ± 0.012, with values >0.9 indicating excellent spatial overlap. BAA showed a modest underestimation of volume on the updated CT versus the ground-truth CT, with a mean difference in volumes of 0.40 cm³ and 95% limits of agreement of 0.04‒0.76 cm³, indicating that all samples fell within acceptable bounds of variability. Computer vision can enable dynamic intraoperative imaging by generating highly accurate CT updates from monocular endoscopic video without external tracking. By directly visualizing resection progress, this software-driven tool has the potential to enhance surgical completeness in ESS for next-generation navigation platforms.
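
The two overlap metrics reported above are straightforward to compute on binary voxel masks and surface point sets. The sketch below uses a brute-force pairwise distance for the percentile (robust) Hausdorff variant; the small masks are made-up examples:

```python
import numpy as np

# Dice similarity coefficient on binary masks: 2|A ∩ B| / (|A| + |B|).
def dice(a, b):
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

# Percentile Hausdorff distance: pool directed nearest-neighbour distances
# in both directions and take the q-th percentile (q=100 is the classic HD).
def percentile_hausdorff(pts_a, pts_b, q=95):
    d = np.linalg.norm(pts_a[:, None, :] - pts_b[None, :, :], axis=-1)
    return np.percentile(np.concatenate([d.min(1), d.min(0)]), q)

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True     # 4x4 square
b = np.zeros((8, 8), bool); b[2:6, 3:7] = True     # same square, shifted 1 voxel
print(dice(a, b))                                  # 0.75
pts_a = np.argwhere(a).astype(float)
pts_b = np.argwhere(b).astype(float)
print(percentile_hausdorff(pts_a, pts_b))          # 1.0 (one-voxel shift)
```

Taking the 95th percentile instead of the maximum makes the surface-distance metric robust to isolated outlier points, which is why it is commonly reported alongside the average.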

Zero-shot Volumetric CT Super-Resolution using 3D Gaussian Splatting with Upsampled 2D X-ray Projection Priors

Jeonghyun Noh, Hyun-Jic Oh, Byungju Chae, Won-Ki Jeong

arxiv preprint · Aug 21 2025
Computed tomography (CT) is widely used in clinical diagnosis, but acquiring high-resolution (HR) CT is limited by radiation exposure risks. Deep learning-based super-resolution (SR) methods have been studied to reconstruct HR from low-resolution (LR) inputs. While supervised SR approaches have shown promising results, they require large-scale paired LR-HR volume datasets that are often unavailable. In contrast, zero-shot methods alleviate the need for paired data by using only a single LR input, but typically struggle to recover fine anatomical details due to limited internal information. To overcome these limitations, we propose a novel zero-shot 3D CT SR framework that leverages upsampled 2D X-ray projection priors generated by a diffusion model. Exploiting the abundance of HR 2D X-ray data, we train a diffusion model on large-scale 2D X-ray projections and introduce a per-projection adaptive sampling strategy that selects the generative process for each projection, thus providing HR projections as strong external priors for 3D CT reconstruction. These projections serve as inputs to 3D Gaussian splatting for reconstructing a 3D CT volume. Furthermore, we propose negative alpha blending (NAB-GS), which allows negative values in the Gaussian density representation. NAB-GS enables residual learning between LR and diffusion-based projections, thereby enhancing high-frequency structure reconstruction. Experiments on two datasets show that our method achieves superior quantitative and qualitative results for 3D CT SR.
