Page 43 of 54537 results

Impact of deep learning reconstruction on radiation dose reduction and cancer risk in CT examinations: a real-world clinical analysis.

Kobayashi N, Nakaura T, Yoshida N, Nagayama Y, Kidoh M, Uetani H, Sakabe D, Kawamata Y, Funama Y, Tsutsumi T, Hirai T

PubMed | Jun 1 2025
The purpose of this study is to estimate the extent to which the implementation of deep learning reconstruction (DLR) may reduce the risk of radiation-induced cancer from CT examinations, using real-world clinical data. We retrospectively analyzed scan data of adult patients who underwent body CT during two periods relative to DLR implementation at our facility: a 12-month pre-DLR phase (n = 5553) using hybrid iterative reconstruction and a 12-month post-DLR phase (n = 5494) with routine CT reconstruction transitioning to DLR. To ensure comparability between the two groups, we performed 1:1 propensity score matching based on age, sex, and body mass index. Dose data were collected to estimate organ-specific equivalent doses and total effective doses. We assessed the average dose reduction after DLR implementation and estimated the lifetime attributable risk (LAR) of cancer per CT exam pre- and post-DLR. The number of radiation-induced cancers before and after the implementation of DLR was also estimated. After propensity score matching, 5247 cases from each group were included in the final analysis. Post-DLR, the total effective body CT dose decreased significantly from 28.1 ± 14.0 mSv pre-DLR to 15.5 ± 10.3 mSv (p < 0.001), a 45% reduction. This dose reduction significantly lowered the radiation-induced cancer risk, especially among younger women, with the estimated annual cancer incidence falling from 0.247% pre-DLR to 0.130% post-DLR. Thus, implementing DLR may reduce radiation dose by 45% and the risk of radiation-induced cancer from 0.247% to 0.130% compared with iterative reconstruction.

Question: Can implementing deep learning reconstruction (DLR) in routine CT scans significantly reduce radiation dose and the risk of radiation-induced cancer compared to hybrid iterative reconstruction?
Findings: DLR reduced the total effective body CT dose by 45% (from 28.1 ± 14.0 mSv to 15.5 ± 10.3 mSv) and decreased the estimated cancer incidence from 0.247% to 0.130%.
Clinical relevance: Adopting DLR in clinical practice substantially lowers radiation exposure and cancer risk from CT exams, enhancing patient safety, especially for younger women, and underscores the importance of advanced imaging techniques.
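The headline 45% figure follows directly from the reported mean effective doses; a quick sketch of the arithmetic (the mSv values are copied from the abstract, and the LAR model itself is not reproduced here):

```python
# Sanity-check of the dose-reduction figure reported above.

def percent_reduction(before: float, after: float) -> float:
    """Relative reduction, expressed as a percentage."""
    return (before - after) / before * 100.0

pre_dlr_msv = 28.1   # mean effective dose, hybrid iterative reconstruction
post_dlr_msv = 15.5  # mean effective dose, deep learning reconstruction

reduction = percent_reduction(pre_dlr_msv, post_dlr_msv)
print(f"Effective dose reduction: {reduction:.0f}%")  # ≈ 45%
```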

Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank-Based Ratings.

Tang C, Eisenmenger LB, Rivera-Rivera L, Huo E, Junn JC, Kuner AD, Oechtering TH, Peret A, Starekova J, Johnson KM

PubMed | Jun 1 2025
Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images. To develop an image quality metric that is specific to MRI using radiologists' image rankings and DL models. Retrospective. A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used to train the neural network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation. 1.5 T and 3 T T1, T1 postcontrast, T2, and FLuid Attenuated Inversion Recovery (FLAIR). Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images using a Likert scale (N = 2). DL models were trained to match rankings using two architectures (EfficientNet and IQ-Net), with and without reference image subtraction, and compared to ranking based on mean squared error (MSE) and structural similarity (SSIM). The image-quality-assessing DL models were evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction. Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated measures analysis of variance. Reconstruction models trained with the IQ-Net score, MSE, and SSIM were compared by paired t test. P < 0.05 was considered significant. Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking was subjective, with a high level of intraobserver agreement (94.9% ± 2.4%) and lower interobserver agreement (61.47% ± 5.51%).
IQ-Net and EfficientNet accurately predicted rankings with a reference image (75.2% ± 1.3% and 79.2% ± 1.7%, respectively). However, EfficientNet produced images with artifacts and high MSE when used in denoising tasks, while IQ-Net-optimized networks performed well for both denoising and reconstruction tasks. Image quality networks can be trained from image rankings and used to optimize DL tasks. Evidence Level: 3. Technical Efficacy: Stage 1.
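Training a quality network from pairwise rankings, as described above, is typically done with a pairwise ranking objective rather than a regression loss. A minimal sketch of one common choice, a margin ranking (hinge) loss — a generic formulation, not necessarily the authors' exact one:

```python
def margin_ranking_loss(score_preferred: float, score_other: float,
                        margin: float = 1.0) -> float:
    """Hinge-style pairwise ranking loss: zero once the preferred image's
    quality score exceeds the other image's score by at least `margin`."""
    return max(0.0, margin - (score_preferred - score_other))

# Toy pair scores: the first is ranked correctly with room to spare,
# the second pair is mis-ranked and therefore penalized.
print(margin_ranking_loss(3.2, 1.0))  # 0.0
print(margin_ranking_loss(1.0, 1.5))  # 1.5
```

Averaged over many radiologist-ranked pairs, minimizing this loss pushes the network's scalar quality score to reproduce the human ordering.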

Ultra-fast biparametric MRI in prostate cancer assessment: Diagnostic performance and image quality compared to conventional multiparametric MRI.

Pausch AM, Filleböck V, Elsner C, Rupp NJ, Eberli D, Hötker AM

PubMed | Jun 1 2025
To compare the diagnostic performance and image quality of a deep-learning-assisted ultra-fast biparametric MRI (bpMRI) with conventional multiparametric MRI (mpMRI) for the diagnosis of clinically significant prostate cancer (csPCa). This prospective single-center study enrolled 123 biopsy-naïve patients undergoing conventional mpMRI and additional ultra-fast bpMRI at 3 T between 06/2023 and 02/2024. Two radiologists (R1: 4 years and R2: 3 years of experience) independently assigned PI-RADS scores (PI-RADS v2.1) and assessed image quality (mPI-QUAL score) in two blinded study readouts. Weighted Cohen's kappa (κ) was calculated to evaluate inter-reader agreement. Diagnostic performance was analyzed using clinical data and histopathological results from clinically indicated biopsies. Inter-reader agreement was good for both mpMRI (κ = 0.83) and ultra-fast bpMRI (κ = 0.87). Both readers demonstrated high sensitivity (≥94%/≥91%, R1/R2) and NPV (≥96%/≥95%) for csPCa detection using both protocols. The more experienced reader generally showed notably higher specificity (≥77%/≥53%), PPV (≥62%/≥45%), and diagnostic accuracy (≥82%/≥65%) than the less experienced reader. There was no significant difference between the two protocols in correctly identifying csPCa (p > 0.05). The ultra-fast bpMRI protocol received significantly better image quality ratings (p < 0.001) and reduced scan time by 80% compared to conventional mpMRI. Deep-learning-assisted ultra-fast bpMRI protocols offer a promising alternative to conventional mpMRI for diagnosing csPCa in biopsy-naïve patients, with comparable inter-reader agreement and diagnostic performance at superior image quality. However, reader experience remains essential for diagnostic performance.
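The sensitivity, specificity, PPV, NPV, and accuracy figures quoted above all derive from a 2×2 confusion table. A short sketch with hypothetical counts (chosen only to mirror the reported pattern of high sensitivity/NPV with more variable specificity/PPV; these are not the study's data):

```python
def diagnostic_performance(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard detection metrics from a 2x2 confusion table
    (disease-positive cases defined here by biopsy-proven csPCa)."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical reader: 30 true positives, 18 false positives,
# 2 false negatives, 50 true negatives.
perf = diagnostic_performance(tp=30, fp=18, fn=2, tn=50)
print({k: round(v, 2) for k, v in perf.items()})
```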

Robust whole-body PET image denoising using 3D diffusion models: evaluation across various scanners, tracers, and dose levels.

Yu B, Ozdemir S, Dong Y, Shao W, Pan T, Shi K, Gong K

PubMed | Jun 1 2025
Whole-body PET imaging plays an essential role in cancer diagnosis and treatment but suffers from low image quality. Traditional deep learning-based denoising methods work well for a specific acquisition but are less effective in handling diverse PET protocols. In this study, we proposed and validated a 3D Denoising Diffusion Probabilistic Model (3D DDPM) as a robust and universal solution for whole-body PET image denoising. The proposed 3D DDPM gradually injected noise into the images during the forward diffusion phase, allowing the model to learn to reconstruct the clean data during the reverse diffusion process. A 3D convolutional network was trained using high-quality data from the Biograph Vision Quadra PET/CT scanner to generate the score function, enabling the model to capture accurate PET distribution information extracted from the total-body datasets. The trained 3D DDPM was evaluated on datasets from four scanners, four tracer types, and six dose levels representing a broad spectrum of clinical scenarios. The proposed 3D DDPM consistently outperformed 2D DDPM, 3D UNet, and 3D GAN, demonstrating its superior denoising performance across all tested conditions. Additionally, the model's uncertainty maps exhibited lower variance, reflecting its higher confidence in its outputs. The proposed 3D DDPM can effectively handle various clinical settings, including variations in dose levels, scanners, and tracers, establishing it as a promising foundational model for PET image denoising. The trained 3D DDPM model of this work can be utilized off the shelf by researchers as a whole-body PET image denoising solution. The code and model are available at https://github.com/Miche11eU/PET-Image-Denoising-Using-3D-Diffusion-Model .
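The forward "noise injection" phase described above has a well-known closed form: x_t = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, with ᾱ_t the cumulative product of (1 − β_s). A toy one-voxel sketch under an assumed linear β schedule (the paper's actual schedule and 3D network are not reproduced here):

```python
import math
import random

def alpha_bar(t: int, T: int = 1000, beta_min: float = 1e-4,
              beta_max: float = 0.02) -> float:
    """Cumulative product of (1 - beta_s) under a linear beta schedule."""
    prod = 1.0
    for s in range(1, t + 1):
        beta = beta_min + (beta_max - beta_min) * (s - 1) / (T - 1)
        prod *= 1.0 - beta
    return prod

def forward_diffuse(x0: float, t: int, rng: random.Random) -> float:
    """Closed-form forward step: x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps."""
    a_bar = alpha_bar(t)
    eps = rng.gauss(0.0, 1.0)
    return math.sqrt(a_bar) * x0 + math.sqrt(1.0 - a_bar) * eps

rng = random.Random(0)
# Early steps barely perturb the voxel value; by t = T the sample is
# essentially pure Gaussian noise, which the reverse process learns to undo.
print(forward_diffuse(1.0, t=10, rng=rng))
print(forward_diffuse(1.0, t=1000, rng=rng))
```

The denoiser is then trained to invert this process step by step, which is what lets a single trained model cope with varying noise levels (and hence dose levels) at inference time.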

Deep learning enabled near-isotropic CAIPIRINHA VIBE in the nephrogenic phase improves image quality and renal lesion conspicuity.

Tan Q, Miao J, Nitschke L, Nickel MD, Lerchbaumer MH, Penzkofer T, Hofbauer S, Peters R, Hamm B, Geisel D, Wagner M, Walter-Rittel TC

PubMed | Jun 1 2025
Deep learning (DL)-accelerated controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA) volumetric interpolated breath-hold examination (VIBE) provides high-spatial-resolution T1-weighted imaging of the upper abdomen. We aimed to investigate whether DL-CAIPIRINHA-VIBE can improve image quality, vessel conspicuity, and lesion detectability compared to standard CAIPIRINHA-VIBE in renal imaging at 3 Tesla. In this prospective study, 50 patients with 23 solid and 45 cystic renal lesions underwent MRI with clinical MR sequences, including standard CAIPIRINHA-VIBE and DL-CAIPIRINHA-VIBE sequences, in the nephrographic phase at 3 Tesla. Two experienced radiologists independently evaluated both sequences and multiplanar reconstructions (MPR) of the sagittal and coronal planes for image quality on a Likert scale ranging from 1 to 5 (5 = best). Quantitative measurements, including the size of the largest lesion and renal lesion contrast ratios, were evaluated. Compared to standard CAIPIRINHA-VIBE, DL-CAIPIRINHA-VIBE showed significantly improved overall image quality, higher scores for delineation of the renal border, renal sinuses, vessels, and adrenal glands, and reduced motion artifacts and perceived noise in nephrographic phase images (all p < 0.001). DL-CAIPIRINHA-VIBE with MPR showed superior lesion conspicuity and diagnostic confidence compared to standard CAIPIRINHA-VIBE. However, DL-CAIPIRINHA-VIBE presented a more synthetic appearance and more aliasing artifacts (p < 0.023). The mean size and signal intensity of renal lesions showed no significant differences between the two sequences (p > 0.9). DL-CAIPIRINHA-VIBE is well suited for kidney imaging in the nephrographic phase, providing good image quality and improved delineation of anatomic structures and renal lesions.

Diffusion Models in Low-Level Vision: A Survey.

He C, Shen Y, Fang C, Xiao F, Tang L, Zhang Y, Zuo W, Guo Z, Li X

PubMed | Jun 1 2025
Deep generative models have gained considerable attention in low-level vision tasks due to their powerful generative capabilities. Among these, diffusion model-based approaches, which employ a forward diffusion process to degrade an image and a reverse denoising process for image generation, have become particularly prominent for producing high-quality, diverse samples with intricate texture details. Despite their widespread success in low-level vision, there remains a lack of a comprehensive, insightful survey that synthesizes and organizes the advances in diffusion model-based techniques. To address this gap, this paper presents the first comprehensive review focused on denoising diffusion models applied to low-level vision tasks, covering both theoretical and practical contributions. We outline three general diffusion modeling frameworks and explore their connections with other popular deep generative models, establishing a solid theoretical foundation for subsequent analysis. We then categorize diffusion models used in low-level vision tasks from multiple perspectives, considering both the underlying framework and the target application. Beyond natural image processing, we also summarize diffusion models applied to other low-level vision domains, including medical imaging, remote sensing, and video processing. Additionally, we provide an overview of widely used benchmarks and evaluation metrics in low-level vision tasks. Our review includes an extensive evaluation of diffusion model-based techniques across six representative tasks, with both quantitative and qualitative analysis. Finally, we highlight the limitations of current diffusion models and propose four promising directions for future research. This comprehensive review aims to foster a deeper understanding of the role of denoising diffusion models in low-level vision.

Does the deep learning-based iterative reconstruction affect the measuring accuracy of bone mineral density in low-dose chest CT?

Hao H, Tong J, Xu S, Wang J, Ding N, Liu Z, Zhao W, Huang X, Li Y, Jin C, Yang J

PubMed | Jun 1 2025
To investigate the impact of a deep learning-based iterative reconstruction algorithm on image quality and the measurement accuracy of bone mineral density (BMD) in low-dose chest CT. Phantom and patient studies were conducted separately, using the same low-dose protocol for both. All images were reconstructed with filtered back projection, hybrid iterative reconstruction (HIR; KARL®, levels 3, 5, and 7), and deep learning-based iterative reconstruction (artificial intelligence iterative reconstruction [AIIR]; low, medium, and high strength). The noise power spectrum (NPS) and the task-based transfer function (TTF) were evaluated using a phantom. The accuracy and relative error (RE) of BMD were evaluated using a European spine phantom. Subjective evaluation was performed by two experienced radiologists. BMD was measured using quantitative CT (QCT). Image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), BMD values, and subjective scores were compared with the Wilcoxon signed-rank test. Cohen's kappa test was used to evaluate inter-reader and inter-group agreement. AIIR significantly reduced noise and improved resolution on phantom images. There were no significant differences among BMD values across all image groups (all P > 0.05), and the RE of BMD measured on AIIR images was smaller. In the objective evaluation, all strengths of AIIR achieved lower image noise and higher SNR and CNR (all P < 0.05), with AIIR-H showing the lowest noise and the highest SNR and CNR (P < 0.05). Increasing the AIIR algorithm strength did not affect BMD values significantly (all P > 0.05). Deep learning-based iterative reconstruction thus reduced image noise and improved spatial resolution in low-dose chest CT without affecting the accuracy of BMD measurement.
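The relative error (RE) of BMD against the spine phantom is a simple ratio of measured to nominal values. A sketch with illustrative numbers (a hypothetical phantom insert with a nominal BMD of 100 mg/cm³; these are not the study's measurements):

```python
def relative_error(measured: float, nominal: float) -> float:
    """Relative error (%) of a measured BMD value against the phantom's
    known nominal value; smaller magnitude means more accurate QCT."""
    return (measured - nominal) / nominal * 100.0

# Hypothetical QCT reading of 97.8 mg/cm^3 against a 100 mg/cm^3 insert.
print(round(relative_error(measured=97.8, nominal=100.0), 2))  # -2.2
```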

Fast aberration correction in 3D transcranial photoacoustic computed tomography via a learning-based image reconstruction method.

Huang HK, Kuo J, Zhang Y, Aborahama Y, Cui M, Sastry K, Park S, Villa U, Wang LV, Anastasio MA

PubMed | Jun 1 2025
Transcranial photoacoustic computed tomography (PACT) holds significant potential as a neuroimaging modality. However, compensating for skull-induced aberrations in reconstructed images remains a challenge. Although optimization-based image reconstruction methods (OBRMs) can account for the relevant wave physics, they are computationally demanding and generally require accurate estimates of the skull's viscoelastic parameters. To circumvent these issues, a learning-based image reconstruction method was investigated for three-dimensional (3D) transcranial PACT. The method was systematically assessed in virtual imaging studies involving stochastic 3D numerical head phantoms and applied to experimental data acquired using a physical head phantom that incorporated a human skull. The results demonstrated that the learning-based method yielded accurate images and was robust to errors in the assumed skull properties, while substantially reducing computational times compared to an OBRM. To the best of our knowledge, this is the first demonstration of a learned image reconstruction method for 3D transcranial PACT.

Scatter and beam hardening effect corrections in pelvic region cone beam CT images using a convolutional neural network.

Yagi S, Usui K, Ogawa K

PubMed | Jun 1 2025
The aim of this study is to remove scattered photons and the beam hardening effect in cone beam CT (CBCT) images and to make the images usable for treatment planning. To this end, a convolutional neural network (CNN) was trained with distorted projection data, including scattered photons and the beam hardening effect, against supervised projection data calculated with monochromatic X-rays. The number of training projection data was 17,280 with data augmentation, and the number of test projection data was 540. The performance of the CNN was investigated with respect to the number of photons in the projection data used for training. Projection data of pelvic CBCT images (32 cases) were calculated with a Monte Carlo simulation at six count levels ranging from 0.5 to 3 million counts/pixel. For the evaluation of corrected images, the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the sum of absolute differences (SAD) were used. The simulations showed that the CNN could effectively remove scattered photons and the beam hardening effect, with significant improvements in PSNR, SSIM, and SAD. The number of photons in the training projection data was also found to be important for correction accuracy. Furthermore, a CNN model trained with projection data containing a sufficient number of photons performed well even when the input projection data contained a small number of photons.
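Two of the evaluation metrics above, PSNR and SAD, are straightforward to compute. A minimal sketch on flattened toy "images" (SSIM is omitted here since it requires windowed local statistics):

```python
import math

def psnr(reference, corrected, max_val: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB between two equally sized images,
    flattened to plain Python sequences for simplicity."""
    mse = sum((r - c) ** 2 for r, c in zip(reference, corrected)) / len(reference)
    return float("inf") if mse == 0 else 10.0 * math.log10(max_val ** 2 / mse)

def sad(reference, corrected) -> float:
    """Sum of absolute differences between two images."""
    return sum(abs(r - c) for r, c in zip(reference, corrected))

ref = [0.2, 0.5, 0.8, 0.4]
out = [0.2, 0.5, 0.8, 0.4]   # a perfectly corrected toy "image"
print(psnr(ref, out))         # inf
print(sad(ref, [0.25, 0.5, 0.8, 0.4]))
```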

Generative adversarial networks in medical image reconstruction: A systematic literature review.

Hussain J, Båth M, Ivarsson J

PubMed | Jun 1 2025
Recent advancements in generative adversarial networks (GANs) have demonstrated substantial potential in medical image processing. Despite this progress, reconstructing images from incomplete data remains a challenge, impacting image quality. This systematic literature review explores the use of GANs in enhancing and reconstructing medical imaging data. A survey of the computing literature was conducted using the ACM Digital Library to identify relevant journal and conference articles via keyword combinations such as "generative adversarial networks or generative adversarial network," "medical image or medical imaging," and "image reconstruction." Across the reviewed articles, 122 datasets were used in 175 instances, 89 top metrics were employed 335 times, 10 different tasks appeared with a total count of 173, 31 distinct organs featured in 119 instances, and 18 modalities were utilized in 121 instances, collectively depicting significant utilization of GANs in medical imaging. The adaptability and efficacy of GANs were showcased across diverse medical tasks, organs, and modalities, using top public as well as private/synthetic datasets for disease diagnosis, including the identification of conditions such as cancer in different anatomical regions. The study emphasized GANs' increasing integration and adaptability across diverse radiology modalities, showcasing their transformative impact on diagnostic techniques, including cross-modality tasks. The intricate interplay between network size, batch size, and loss function refinement significantly impacts GAN performance, although challenges in training persist. Overall, the review positions GANs as dynamic tools shaping medical imaging, contributing significantly to image quality, training methodologies, and broader medical advancements.