Page 13 of 18177 results

Deep Learning Image Reconstruction (DLIR) Algorithm to Maintain High Image Quality and Diagnostic Accuracy in Quadruple-low CT Angiography of Children with Pulmonary Sequestration: A Case-Control Study.

Li H, Zhang Y, Hua S, Sun R, Zhang Y, Yang Z, Peng Y, Sun J

PubMed paper · May 22, 2025
CT angiography (CTA) is a commonly used clinical examination to detect abnormal arteries and diagnose pulmonary sequestration (PS). Reducing the radiation dose, contrast medium dose, and injection pressure in CTA, especially in children, has long been an important research topic, but few studies have been verified against pathology. The current study aimed to evaluate the diagnostic accuracy for children with PS of quadruple-low CTA (4L-CTA: low tube voltage, radiation dose, contrast medium, and injection flow rate) using deep learning image reconstruction (DLIR), in comparison with a routine protocol reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V). MATERIALS AND METHODS: 53 patients (1.50±1.36 years) with suspected PS were enrolled to undergo chest 4L-CTA at 70 kVp tube voltage, with a radiation dose of 0.90 mGy in volumetric CT dose index (CTDIvol) and a contrast medium dose of 0.8 ml/kg injected over 16 s. Images were reconstructed using DLIR. Another 53 patients (1.25±1.02 years) scanned with a routine-dose protocol served as the comparison group, with images reconstructed using ASIR-V. The contrast-to-noise ratio (CNR) and edge-rise distance (ERD) of the aorta were calculated. Subjective overall image quality and artery visualization were evaluated on a 5-point scale (5, excellent; 3, acceptable). All patients underwent surgery after CT, and the sensitivity and specificity for diagnosing PS were calculated. 4L-CTA reduced radiation dose by 51%, contrast dose by 47%, injection flow rate by 44%, and injection pressure by 44% compared to routine CTA (all p<0.05). Both groups had satisfactory subjective image quality and achieved 100% sensitivity and specificity for diagnosing PS. 4L-CTA had a lower CNR (by 27%, p<0.05) but similar ERD, which reflects image spatial resolution (p>0.05), compared to routine CTA. 4L-CTA revealed small arteries with diameters down to 0.8 mm. DLIR enables 4L-CTA in children with PS, delivering significant radiation and contrast dose reductions while maintaining image quality, visualization of small arteries, and high diagnostic accuracy.
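
As a point of reference for the CNR figures above, the contrast-to-noise ratio is typically computed from region-of-interest statistics; a minimal sketch (ROI values hypothetical; the study's exact convention may differ):

```python
import numpy as np

def cnr(roi_signal: np.ndarray, roi_background: np.ndarray) -> float:
    """Contrast-to-noise ratio: (mean_signal - mean_background) / noise.

    Noise is taken as the standard deviation of the background ROI,
    one common convention (the paper's exact definition may differ).
    """
    return (roi_signal.mean() - roi_background.mean()) / roi_background.std()

# Hypothetical HU samples from an aortic ROI and a paraspinal muscle ROI.
rng = np.random.default_rng(0)
aorta = rng.normal(350.0, 20.0, size=500)   # contrast-enhanced aorta, HU
muscle = rng.normal(50.0, 15.0, size=500)   # background tissue, HU
print(f"CNR = {cnr(aorta, muscle):.1f}")
```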

Leveraging deep learning-based kernel conversion for more precise airway quantification on CT.

Choe J, Yun J, Kim MJ, Oh YJ, Bae S, Yu D, Seo JB, Lee SM, Lee HY

PubMed paper · May 22, 2025
To evaluate the variability of fully automated airway quantitative CT (QCT) measures caused by different kernels and the effect of kernel conversion. This retrospective study included 96 patients who underwent non-enhanced chest CT at two centers. CT scans were reconstructed using four kernels (medium soft, medium sharp, sharp, very sharp) from three vendors. Kernel conversion targeting the medium soft kernel as reference was applied to sharp kernel images. Fully automated airway quantification was performed before and after conversion. The effects of kernel type and conversion on airway quantification were evaluated using analysis of variance, paired t-tests, and the concordance correlation coefficient (CCC). Airway QCT measures (e.g., Pi10, wall thickness, wall area percentage, lumen diameter) decreased with sharper kernels (all p < 0.001), with varying degrees of variability across variables and vendors. Kernel conversion substantially reduced variability between medium soft and sharp kernel images for vendors A (pooled CCC: 0.59 vs. 0.92) and B (0.40 vs. 0.91) and for the lung-dedicated sharp kernels of vendor C (0.26 vs. 0.71). However, it was ineffective for the non-lung-dedicated sharp kernels of vendor C (0.81 vs. 0.43) and showed limited improvement in the variability of QCT measures at the subsegmental level. Consistent airway segmentation and identical anatomic labeling improved subsegmental airway variability in theoretical tests. Deep learning-based kernel conversion reduced the measurement variability of airway QCT across various kernels and vendors but was less effective for non-lung-dedicated kernels and subsegmental airways. Consistent airway segmentation and precise anatomic labeling can further enhance reproducibility for reliable automated quantification.
Question: How do different CT reconstruction kernels affect the measurement variability of automated airway measurements, and can deep learning-based kernel conversion reduce this variability?
Findings: Kernel conversion improved measurement consistency across vendors for lung-dedicated kernels, but showed limited effectiveness for non-lung-dedicated kernels and subsegmental airways.
Clinical relevance: Understanding kernel-related variability in airway quantification and mitigating it through deep learning enables standardized analysis, but further refinements are needed for robust airway segmentation, particularly for improving measurement variability in subsegmental airways and specific kernels.
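
For readers unfamiliar with the agreement metric, Lin's concordance correlation coefficient (CCC) can be computed directly from paired measurements; a minimal sketch, with hypothetical paired Pi10 values:

```python
import numpy as np

def lins_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between paired measurements.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    Penalizes both poor correlation and systematic bias between methods.
    """
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Hypothetical Pi10 values: medium-soft reference vs. converted sharp kernel.
reference = np.array([3.61, 3.72, 3.55, 3.80, 3.67])
converted = np.array([3.58, 3.75, 3.52, 3.83, 3.70])
print(f"CCC = {lins_ccc(reference, converted):.3f}")
```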

High-resolution deep learning reconstruction to improve the accuracy of CT fractional flow reserve.

Tomizawa N, Fan R, Fujimoto S, Nozaki YO, Kawaguchi YO, Takamura K, Hiki M, Aikawa T, Takahashi N, Okai I, Okazaki S, Kumamaru KK, Minamino T, Aoki S

PubMed paper · May 22, 2025
This study aimed to compare the diagnostic performance of CT-derived fractional flow reserve (CT-FFR) using model-based iterative reconstruction (MBIR) and high-resolution deep learning reconstruction (HR-DLR) images to detect functionally significant stenosis, with invasive FFR as the reference standard. This single-center retrospective study included 79 consecutive patients (mean age, 70 ± 11 [SD] years; 57 male) who underwent coronary CT angiography followed by invasive FFR between February 2022 and March 2024. CT-FFR was calculated using a mesh-free simulation. The cutoff for functionally significant stenosis was defined as FFR ≤ 0.80. CT-FFR from MBIR and HR-DLR images was compared using receiver operating characteristic curve analysis. The mean invasive FFR value was 0.81 ± 0.09, and 46 of 98 vessels (47%) had FFR ≤ 0.80. The mean image noise of HR-DLR was lower than that of MBIR (14.4 ± 1.7 vs 23.5 ± 3.1, p < 0.001). The area under the receiver operating characteristic curve for the diagnosis of functionally significant stenosis was higher for HR-DLR (0.88; 95% CI: 0.80, 0.95) than for MBIR (0.76; 95% CI: 0.67, 0.86; p = 0.003). The diagnostic accuracy of HR-DLR (88%; 86 of 98 vessels; 95% CI: 80, 94) was higher than that of MBIR (70%; 69 of 98 vessels; 95% CI: 60, 79; p < 0.001). HR-DLR improves image quality and the diagnostic performance of CT-FFR for the diagnosis of functionally significant stenosis.
Question: The effect of HR-DLR on the diagnostic performance of CT-FFR has not been investigated.
Findings: HR-DLR improved the diagnostic performance of CT-FFR over MBIR for the diagnosis of functionally significant stenosis as assessed by invasive FFR.
Clinical relevance: HR-DLR would further enhance the clinical utility of CT-FFR in diagnosing the functional significance of coronary stenosis.
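
For context on the ROC analysis, the target is binary (invasive FFR ≤ 0.80) and per-vessel CT-FFR values serve as the score; a minimal sketch using scikit-learn, with hypothetical values (the study used 98 vessels):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-vessel values; the real study analyzed 98 vessels.
invasive_ffr = np.array([0.92, 0.78, 0.85, 0.70, 0.81, 0.65, 0.88, 0.74])
ct_ffr_hrdlr = np.array([0.90, 0.76, 0.84, 0.72, 0.83, 0.66, 0.87, 0.75])
ct_ffr_mbir  = np.array([0.85, 0.80, 0.79, 0.76, 0.84, 0.72, 0.83, 0.79])

# Functionally significant stenosis: invasive FFR <= 0.80.
y_true = (invasive_ffr <= 0.80).astype(int)

# Lower CT-FFR indicates stenosis, so negate values to use them as a score.
auc_hrdlr = roc_auc_score(y_true, -ct_ffr_hrdlr)
auc_mbir = roc_auc_score(y_true, -ct_ffr_mbir)
print(f"AUC HR-DLR = {auc_hrdlr:.2f}, AUC MBIR = {auc_mbir:.2f}")
```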

Cross-Scale Texture Supplementation for Reference-based Medical Image Super-Resolution.

Li Y, Hao W, Zeng H, Wang L, Xu J, Routray S, Jhaveri RH, Gadekallu TR

PubMed paper · May 22, 2025
Magnetic Resonance Imaging (MRI) is a widely used medical imaging technique, but its resolution is often limited by acquisition time constraints, potentially compromising diagnostic accuracy. Reference-based Image Super-Resolution (RefSR) has shown promising performance in addressing such challenges by leveraging external high-resolution (HR) reference images to enhance the quality of low-resolution (LR) images. The core objective of RefSR is to accurately establish correspondences between the HR reference image and the LR images. To this end, this paper develops a Self-rectified Texture Supplementation network for RefSR (STS-SR) to enhance fine details in MRI images and support the expanding role of autonomous AI in healthcare. Our network comprises a texture-specified self-rectified feature transfer module and a cross-scale texture complementary network. The feature transfer module employs high-frequency filtering to help the network concentrate on fine details. To better exploit the information in both the reference and LR images, our cross-scale texture complementary module incorporates All-ViT and Swin Transformer layers to aggregate features at multiple scales, enabling the high-quality image enhancement that autonomous AI systems in healthcare need to make accurate decisions. Extensive experiments across various benchmark datasets validate the effectiveness of our method and demonstrate state-of-the-art performance compared to existing approaches. This advancement enables autonomous AI systems to use high-quality MRI images for more accurate diagnostics and reliable predictions.
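
The high-frequency filtering used in the feature transfer module can be illustrated with a generic blur-residual decomposition; this is a sketch of the general idea, not the STS-SR implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def high_frequency_component(image: np.ndarray, sigma: float = 2.0) -> np.ndarray:
    """Split an image into low- and high-frequency parts via Gaussian blur.

    The residual (image - blurred) keeps edges and fine texture, the kind
    of signal a texture-transfer module would concentrate on.
    """
    low = gaussian_filter(image, sigma=sigma)
    return image - low

# Hypothetical MRI slice as a float array in [0, 1].
rng = np.random.default_rng(0)
slice_2d = rng.random((256, 256)).astype(np.float32)
hf = high_frequency_component(slice_2d)
print(hf.shape, hf.mean())
```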

Non-rigid Motion Correction for MRI Reconstruction via Coarse-To-Fine Diffusion Models

Frederic Wang, Jonathan I. Tamir

arXiv preprint · May 21, 2025
Magnetic Resonance Imaging (MRI) is highly susceptible to motion artifacts due to the extended acquisition times required for k-space sampling. These artifacts can compromise diagnostic utility, particularly for dynamic imaging. We propose a novel alternating minimization framework that leverages a bespoke diffusion model to jointly reconstruct and correct non-rigid motion-corrupted k-space data. The diffusion model uses a coarse-to-fine denoising strategy to capture large overall motion and reconstruct the lower frequencies of the image first, providing a better inductive bias for motion estimation than that of standard diffusion models. We demonstrate the performance of our approach on both real-world cine cardiac MRI datasets and complex simulated rigid and non-rigid deformations, even when each motion state is undersampled by a factor of 64. Additionally, our method is agnostic to sampling patterns, anatomical variations, and MRI scanning protocols, as long as some low-frequency components are sampled during each motion state.
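
The alternating-minimization structure can be sketched at a high level; every function and shape below is hypothetical scaffolding standing in for the authors' diffusion model and motion estimator:

```python
import numpy as np

def reconstruct_with_diffusion(kspace, motion, noise_level):
    """Placeholder for one coarse-to-fine diffusion denoising step
    conditioned on the current motion estimate (hypothetical)."""
    return np.fft.ifft2(kspace)  # stand-in: plain inverse FFT

def estimate_motion(image, kspace):
    """Placeholder for non-rigid motion estimation against the data
    (hypothetical); returns a dense displacement field."""
    return np.zeros(image.shape + (2,))

# Hypothetical undersampled k-space for one motion state.
kspace = np.fft.fft2(np.random.default_rng(0).random((128, 128)))
motion = np.zeros((128, 128, 2))

# Coarse-to-fine schedule: high noise levels recover low frequencies
# and bulk motion first, then finer detail at lower noise levels.
for noise_level in [1.0, 0.5, 0.25, 0.1, 0.05]:
    image = reconstruct_with_diffusion(kspace, motion, noise_level)
    motion = estimate_motion(image, kspace)
print(image.shape)
```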

Reconsider the Template Mesh in Deep Learning-based Mesh Reconstruction

Fengting Zhang, Boxu Liang, Qinghao Liu, Min Liu, Xiang Chen, Yaonan Wang

arXiv preprint · May 21, 2025
Mesh reconstruction is a cornerstone process across various applications, including in-silico trials, digital twins, surgical planning, and navigation. Recent advancements in deep learning have notably enhanced mesh reconstruction speeds. Yet, traditional methods predominantly rely on deforming a standardised template mesh for individual subjects, which overlooks the unique anatomical variations between them, and may compromise the fidelity of the reconstructions. In this paper, we propose an adaptive-template-based mesh reconstruction network (ATMRN), which generates adaptive templates from the given images for the subsequent deformation, moving beyond the constraints of a singular, fixed template. Our approach, validated on cortical magnetic resonance (MR) images from the OASIS dataset, sets a new benchmark in voxel-to-cortex mesh reconstruction, achieving an average symmetric surface distance of 0.267 mm across four cortical structures. Our proposed method is generic and can be easily transferred to other image modalities and anatomical structures.
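
The reported metric, average symmetric surface distance (ASSD), has a standard definition over two surface point sets; a minimal sketch with hypothetical point clouds:

```python
import numpy as np
from scipy.spatial import cKDTree

def assd(surface_a: np.ndarray, surface_b: np.ndarray) -> float:
    """Average symmetric surface distance between two point sets (N, 3).

    Averages nearest-neighbor distances in both directions, the usual
    definition for mesh-reconstruction benchmarks.
    """
    d_ab = cKDTree(surface_b).query(surface_a)[0]
    d_ba = cKDTree(surface_a).query(surface_b)[0]
    return (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))

# Hypothetical vertex samples from predicted and ground-truth cortex meshes.
rng = np.random.default_rng(0)
pred = rng.random((1000, 3))
gt = pred + rng.normal(0, 0.01, size=pred.shape)
print(f"ASSD = {assd(pred, gt):.4f} (same units as the coordinates)")
```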

X-GRM: Large Gaussian Reconstruction Model for Sparse-view X-rays to Computed Tomography

Yifan Liu, Wuyang Li, Weihao Yu, Chenxin Li, Alexandre Alahi, Max Meng, Yixuan Yuan

arXiv preprint · May 21, 2025
Computed Tomography serves as an indispensable tool in clinical workflows, providing non-invasive visualization of internal anatomical structures. Existing CT reconstruction works are limited by small-capacity model architectures, inflexible volume representations, and small-scale training data. In this paper, we present X-GRM (X-ray Gaussian Reconstruction Model), a large feedforward model for reconstructing 3D CT from sparse-view 2D X-ray projections. X-GRM employs a scalable transformer-based architecture to encode an arbitrary number of sparse X-ray inputs, where tokens from different views are integrated efficiently. These tokens are then decoded into a new volume representation, named Voxel-based Gaussian Splatting (VoxGS), which enables efficient CT volume extraction and differentiable X-ray rendering. To support the training of X-GRM, we collect ReconX-15K, a large-scale CT reconstruction dataset containing around 15,000 CT/X-ray pairs across diverse organs, including the chest, abdomen, pelvis, and teeth. This combination of a high-capacity model, flexible volume representation, and large-scale training data empowers our model to produce high-quality reconstructions from various testing inputs, including in-domain and out-of-domain X-ray projections. Project Page: https://github.com/CUHK-AIM-Group/X-GRM.
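
As background for the differentiable X-ray rendering that VoxGS enables, a parallel-beam projection of a voxel volume reduces, for axis-aligned rays, to a sum along one axis; a toy sketch (not the VoxGS renderer):

```python
import torch

def parallel_beam_projection(volume: torch.Tensor, axis: int = 0) -> torch.Tensor:
    """Toy differentiable X-ray projection: integrate attenuation along
    axis-aligned parallel rays (a stand-in for the VoxGS renderer).
    Gradients flow back to the volume, so a reconstruction can be fit
    by optimizing the volume against measured projections.
    """
    return volume.sum(dim=axis)

# Hypothetical 64^3 attenuation volume with gradients enabled.
volume = torch.rand(64, 64, 64, requires_grad=True)
projection = parallel_beam_projection(volume, axis=0)

# Example loss against a hypothetical measured projection.
measured = torch.rand(64, 64)
loss = torch.nn.functional.mse_loss(projection, measured)
loss.backward()
print(volume.grad.shape)  # gradients reach every voxel
```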

Blind Restoration of High-Resolution Ultrasound Video

Chu Chen, Kangning Cui, Pasquale Cascarano, Wei Tang, Elena Loli Piccolomini, Raymond H. Chan

arXiv preprint · May 20, 2025
Ultrasound imaging is widely applied in clinical practice, yet ultrasound videos often suffer from low signal-to-noise ratios (SNR) and limited resolution, posing challenges for diagnosis and analysis. Variations in equipment and acquisition settings can further exacerbate differences in data distribution and noise levels, reducing the generalizability of pre-trained models. This work presents a self-supervised ultrasound video super-resolution algorithm called Deep Ultrasound Prior (DUP). DUP employs a video-adaptive optimization process in which a neural network enhances the resolution of a given ultrasound video while simultaneously removing noise, without requiring paired training data. Quantitative and visual evaluations demonstrate that DUP outperforms existing super-resolution algorithms, leading to substantial improvements in downstream applications.
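
The deep-prior idea DUP builds on, fitting an untrained network to a single degraded input with early stopping as the regularizer, can be sketched in a few lines; this is a generic deep-image-prior loop with a hypothetical average-pooling degradation model, not the DUP pipeline:

```python
import torch
import torch.nn as nn

# Hypothetical degraded low-resolution frame (1 channel, 64x64).
lr_frame = torch.rand(1, 1, 64, 64)

# Small untrained CNN whose inductive bias favors clean images.
net = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
noise_input = torch.rand_like(lr_frame)
downsample = nn.AvgPool2d(2)                               # stand-in degradation
upsample = nn.Upsample(scale_factor=2, mode="bilinear")    # SR output size

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(200):  # early stopping acts as the regularizer
    sr = upsample(net(noise_input))            # 128x128 estimate
    loss = nn.functional.mse_loss(downsample(sr), lr_frame)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(sr.shape)
```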

Deep-Learning Reconstruction for 7T MP2RAGE and SPACE MRI: Improving Image Quality at High Acceleration Factors.

Liu Z, Patel V, Zhou X, Tao S, Yu T, Ma J, Nickel D, Liebig P, Westerhold EM, Mojahed H, Gupta V, Middlebrooks EH

PubMed paper · May 20, 2025
Deep learning (DL) reconstruction has been successful in realizing otherwise impracticable acceleration factors and improving image quality at conventional MRI field strengths; however, there has been limited application to ultra-high-field MRI. The objective of this study was to evaluate the performance of a prototype DL-based image reconstruction technique in 7T MRI of the brain using MP2RAGE and SPACE acquisitions, in comparison with conventional compressed sensing (CS) and controlled aliasing in parallel imaging (CAIPIRINHA) reconstructions. This retrospective study involved 60 patients who underwent 7T brain MRI between June 2024 and October 2024, comprising 30 patients with MP2RAGE data and 30 patients with SPACE FLAIR data. Each set of raw data was reconstructed with DL-based reconstruction and conventional reconstruction. Image quality was independently assessed by two neuroradiologists using a 5-point Likert scale covering overall image quality, artifacts, sharpness, structural conspicuity, and noise level. Inter-observer agreement was determined using top-box analysis. Contrast-to-noise ratio (CNR) and noise levels were quantitatively evaluated and compared using the Wilcoxon signed-rank test. DL-based reconstruction resulted in a significant increase in overall image quality and a reduction in subjective noise level for both MP2RAGE and SPACE FLAIR data (all P<0.001), with no significant differences in image artifacts (all P>0.05). Compared to standard reconstruction, DL-based reconstruction yielded an increase in CNR of 49.5% [95% CI 33.0-59.0%] for MP2RAGE data and 90.6% [95% CI 73.2-117.7%] for SPACE FLAIR data, along with a decrease in noise of 33.5% [95% CI 23.0-38.0%] for MP2RAGE data and 47.5% [95% CI 41.9-52.6%] for SPACE FLAIR data. DL-based reconstruction of 7T MRI significantly enhanced image quality compared to conventional reconstruction without introducing image artifacts. The achievable high acceleration factors have the potential to substantially improve image quality and resolution in 7T MRI.
Abbreviations: CAIPIRINHA = Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration; CNR = contrast-to-noise ratio; CS = compressed sensing; DL = deep learning; MNI = Montreal Neurological Institute; MP2RAGE = Magnetization-Prepared 2 Rapid Acquisition Gradient Echoes; SPACE = Sampling Perfection with Application-Optimized Contrasts using Different Flip Angle Evolutions.
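
The paired CNR comparison relies on the Wilcoxon signed-rank test; a minimal sketch with SciPy, using hypothetical per-patient values:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired per-patient CNR values, standard vs. DL reconstruction.
cnr_standard = np.array([12.1, 10.8, 13.4, 11.0, 12.7, 9.9, 11.6, 12.3])
cnr_dl = cnr_standard * 1.495  # ~49.5% increase, mirroring the reported MP2RAGE gain

# Paired, non-parametric comparison of the two reconstructions.
stat, p = wilcoxon(cnr_standard, cnr_dl)
print(f"Wilcoxon statistic = {stat}, p = {p:.4f}")
```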
