Page 29 of 55541 results

Accelerated Multi-b-Value DWI Using Deep Learning Reconstruction: Image Quality Improvement and Microvascular Invasion Prediction in BCLC Stage A Hepatocellular Carcinoma.

Zhu Y, Wang P, Wang B, Feng B, Cai W, Wang S, Meng X, Wang S, Zhao X, Ma X

pubmed · Jul 1 2025
To investigate the effect of accelerated deep-learning (DL) multi-b-value DWI (Mb-DWI) on acquisition time, image quality, and predictive ability for microvascular invasion (MVI) in BCLC stage A hepatocellular carcinoma (HCC), compared to standard Mb-DWI. Patients who underwent liver MRI were prospectively enrolled. Subjective image quality, signal-to-noise ratio (SNR), lesion contrast-to-noise ratio (CNR), and Mb-DWI-derived parameters from various models (mono-exponential model, intravoxel incoherent motion, diffusion kurtosis imaging, and stretched exponential model) were calculated and compared between the two sequences. The Mb-DWI parameters of the two sequences were compared between MVI-positive and MVI-negative groups, respectively. ROC and logistic regression analyses were performed to evaluate and identify predictive performance. The study included 118 patients; 48/118 (40.67%) lesions were identified as MVI positive. DL Mb-DWI significantly reduced acquisition time by 52.86% and produced significantly higher overall image quality, SNR, and CNR than standard Mb-DWI. All diffusion-related parameters except the pseudo-diffusion coefficient showed significant differences between the two sequences. In both DL and standard Mb-DWI, the apparent diffusion coefficient, true diffusion coefficient (D), perfusion fraction (f), mean diffusivity (MD), mean kurtosis (MK), and distributed diffusion coefficient (DDC) values were significantly different between MVI-positive and MVI-negative groups. The combination of D, f, and MK yielded the highest AUCs of 0.912 and 0.928 in the standard and DL sequences, respectively, with no significant difference in predictive efficiency. DL Mb-DWI significantly reduces acquisition time and improves image quality, with predictive performance comparable to standard Mb-DWI in discriminating MVI status in BCLC stage A HCC.
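The signal models compared in this abstract have standard closed forms. A minimal sketch of the mono-exponential ADC fit and the IVIM model on synthetic data (all b-values and parameter values below are illustrative, not the study's protocol):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative b-values (s/mm^2) for a multi-b-value liver DWI protocol
b = np.array([0, 50, 100, 200, 400, 800, 1500], dtype=float)

def mono_exp(b, S0, adc):
    # Mono-exponential model: S = S0 * exp(-b * ADC)
    return S0 * np.exp(-b * adc)

def ivim(b, S0, f, Dstar, D):
    # Intravoxel incoherent motion: perfusion fraction f,
    # pseudo-diffusion coefficient D*, true diffusion coefficient D
    return S0 * (f * np.exp(-b * Dstar) + (1 - f) * np.exp(-b * D))

# Synthetic signal from assumed IVIM parameters (mm^2/s)
true = dict(S0=1.0, f=0.25, Dstar=20e-3, D=1.1e-3)
signal = ivim(b, **true)

# Fitting the mono-exponential model over all b-values: the perfusion
# component inflates the ADC estimate above the true diffusion D
(S0_fit, adc_fit), _ = curve_fit(mono_exp, b, signal, p0=[1.0, 1e-3])
print(f"ADC = {adc_fit * 1e3:.2f} x10^-3 mm^2/s")
```

In practice the IVIM, kurtosis, and stretched-exponential parameters are fitted voxel-wise the same way, usually with bounds and segmented fitting for stability.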

Deep learning image enhancement algorithms in PET/CT imaging: a phantom and sarcoma patient radiomic evaluation.

Bonney LM, Kalisvaart GM, van Velden FHP, Bradley KM, Hassan AB, Grootjans W, McGowan DR

pubmed · Jul 1 2025
PET/CT imaging data contains a wealth of quantitative information that can make valuable contributions to characterising tumours. A growing body of work focuses on the use of deep-learning (DL) techniques for denoising PET data. These models are clinically evaluated prior to use; however, quantitative image assessment offers scope for further evaluation. This work uses radiomic features to compare two manufacturer DL image enhancement algorithms, one of which has been commercialised, against 'gold-standard' image reconstruction techniques in phantom data and a sarcoma patient data set (N=20). All studies in the retrospective sarcoma clinical [<sup>18</sup>F]FDG dataset were acquired on either a GE Discovery 690 or 710 PET/CT scanner, with volumes segmented by an experienced nuclear medicine radiologist. The modular heterogeneous imaging phantom used in this work was filled with [<sup>18</sup>F]FDG, and five repeat acquisitions of the phantom were acquired on a GE Discovery 710 PET/CT scanner. The DL-enhanced images were compared both to the 'gold-standard' images the algorithms were trained to emulate and to the input images. The difference between image sets was tested for significance in 93 international biomarker standardisation initiative (IBSI) standardised radiomic features. Comparing DL-enhanced images to the 'gold-standard', 4.0% and 9.7% of radiomic features measured significantly different (p<sub>critical</sub> < 0.0005) in the phantom and patient data, respectively (averaged over the two DL algorithms).
Larger differences were observed when comparing DL-enhanced images to algorithm input images, with 29.8% and 43.0% of radiomic features measuring significantly different in the phantom and patient data, respectively (averaged over the two DL algorithms). DL-enhanced images were found to be similar to images generated using the 'gold-standard' target reconstruction method, with more than 80% of radiomic features not significantly different in all comparisons across unseen phantom and sarcoma patient data. This result offers insight into the performance of the DL algorithms and demonstrates potential applications for DL algorithms in harmonisation for radiomics and for radiomic features in the quantitative evaluation of DL algorithms.
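The p<sub>critical</sub> < 0.0005 threshold is consistent with a Bonferroni correction of 0.05 over 93 features (0.05/93 ≈ 0.00054). A sketch of this style of per-feature significance testing on synthetic data (the Wilcoxon test, cohort size, and shift pattern are illustrative assumptions, not the paper's exact statistics):

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n_patients, n_features = 20, 93
p_critical = 0.05 / n_features  # Bonferroni: ~0.00054, matching the paper's 0.0005

# Synthetic paired feature matrices: 'gold-standard' vs DL-enhanced
gold = rng.normal(0.0, 1.0, size=(n_patients, n_features))
dl = gold + rng.normal(0.0, 0.1, size=gold.shape)
dl[:, :5] += 1.5  # hypothetical systematic shift in 5 features

# Paired nonparametric test per feature, then count the significant fraction
pvals = np.array([wilcoxon(gold[:, j], dl[:, j]).pvalue
                  for j in range(n_features)])
pct_diff = 100 * np.mean(pvals < p_critical)
print(f"{pct_diff:.1f}% of features significantly different")
```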

Multi-scale geometric transformer for sparse-view X-ray 3D foot reconstruction.

Wang W, An L, Han G

pubmed · Jul 1 2025
Sparse-view X-ray 3D foot reconstruction aims to reconstruct the three-dimensional structure of the foot from sparse-view X-ray images, a task made challenging by data sparsity and limited viewpoints. This paper presents a novel method using a multi-scale geometric Transformer to enhance reconstruction accuracy and detail representation. Geometric position encoding and a window mechanism are introduced to divide X-ray images into local areas and finely capture local features. A multi-scale Transformer module based on Neural Radiance Fields (NeRF) enhances the model's ability to express and capture details in complex structures, and an adaptive weight learning strategy further optimizes the Transformer's feature extraction and long-range dependency modelling. Experimental results demonstrate that the multi-scale geometric Transformer effectively captures both local and global features, significantly improving the accuracy and detail preservation of 3D foot reconstructions under sparse-view X-ray conditions.
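The abstract's "geometric position encoding" is not specified; the standard NeRF frequency encoding that such modules typically build on can be sketched as follows (an illustration of the generic technique, not the authors' implementation):

```python
import numpy as np

def nerf_positional_encoding(x, n_freqs=4):
    # Standard NeRF frequency encoding: each coordinate is mapped to
    # [sin(2^k * pi * x), cos(2^k * pi * x)] for k = 0 .. n_freqs-1,
    # letting an MLP/Transformer resolve high-frequency geometric detail.
    freqs = (2.0 ** np.arange(n_freqs)) * np.pi
    enc = np.concatenate([np.sin(x[..., None] * freqs),
                          np.cos(x[..., None] * freqs)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)

# Two 3-D points -> 3 coords * 2 (sin, cos) * 4 freqs = 24 features each
pts = np.array([[0.0, 0.5, 1.0],
                [0.25, 0.75, -0.5]])
features = nerf_positional_encoding(pts)
print(features.shape)
```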

Mind the Detail: Uncovering Clinically Relevant Image Details in Accelerated MRI with Semantically Diverse Reconstructions

Jan Nikolas Morshuis, Christian Schlarmann, Thomas Küstner, Christian F. Baumgartner, Matthias Hein

arxiv preprint · Jul 1 2025
In recent years, accelerated MRI reconstruction based on deep learning has led to significant improvements in image quality, with impressive results for high acceleration factors. From a clinical perspective, however, image quality is only secondary; far more important is that all clinically relevant information is preserved in the reconstruction from heavily undersampled data. In this paper, we show that existing techniques, even when considering resampling for diffusion-based reconstruction, can fail to reconstruct small and rare pathologies, thus leading to potentially wrong diagnostic decisions (false negatives). To uncover the potentially missing clinical information we propose "Semantically Diverse Reconstructions" (SDR), a method which, given an original reconstruction, generates novel reconstructions with enhanced semantic variability while all of them remain fully consistent with the measured data. To evaluate SDR automatically we train an object detector on the fastMRI+ dataset. We show that SDR significantly reduces the chance of false-negative diagnoses (higher recall) and improves mean average precision compared to the original reconstructions. The code is available at https://github.com/NikolasMorshuis/SDR
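The claim that all SDR samples are "fully consistent with the measured data" typically rests on a hard data-consistency projection in k-space; a minimal sketch on toy data (not the paper's pipeline):

```python
import numpy as np

def enforce_data_consistency(image, kspace_meas, mask):
    # Project a reconstruction onto the set of images consistent with
    # the measured k-space samples: keep measured frequencies exactly,
    # keep the model's values everywhere else.
    k = np.fft.fft2(image)
    k[mask] = kspace_meas[mask]
    return np.fft.ifft2(k)

# Hypothetical example: ~25% random sampling of a toy 32x32 "scan"
rng = np.random.default_rng(1)
truth = rng.normal(size=(32, 32))
mask = rng.random((32, 32)) < 0.25
kspace_meas = np.fft.fft2(truth) * mask

recon = rng.normal(size=(32, 32))   # stand-in for any network output
dc = enforce_data_consistency(recon, kspace_meas, mask)
```

After the projection, any two "diverse" reconstructions agree exactly at the measured k-space locations and differ only in the unmeasured frequencies.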

CT-free attenuation and Monte-Carlo based scatter correction-guided quantitative <sup>90</sup>Y-SPECT imaging for improved dose calculation using deep learning.

Mansouri Z, Salimi Y, Wolf NB, Mainta I, Zaidi H

pubmed · Jul 1 2025
This work aimed to develop deep learning (DL) models for CT-free attenuation and Monte Carlo-based scatter correction (AC, SC) in quantitative <sup>90</sup>Y SPECT imaging for improved dose calculation. Data from 190 patients who underwent <sup>90</sup>Y selective internal radiation therapy (SIRT) with glass microspheres were studied. Voxel-level dosimetry was performed on uncorrected and corrected SPECT images using the local energy deposition method. Three deep learning models were trained individually for AC, SC, and joint ASC using a modified 3D shifted-window UNet Transformer (Swin UNETR) architecture. Corrected and uncorrected dose maps served as reference and input, respectively. The data were split into a training set (~80%) and an unseen test set (~20%). Training was conducted in a five-fold cross-validation scheme, and the trained models were tested on the unseen test set. Model performance was thoroughly evaluated by comparing organ- and voxel-level dosimetry results between the reference and DL-generated dose maps on the unseen test dataset. The voxel- and organ-level evaluations also included gamma analysis with three different distance-to-agreement (DTA, mm) and dose-difference (DD, %) criteria to explore suitable criteria in SIRT dosimetry using SPECT. The average ± SD of the voxel-level quantitative metrics for the AC task are mean error (ME, Gy): -0.026 ± 0.06, structural similarity index (SSIM, %): 99.5 ± 0.25, and peak signal-to-noise ratio (PSNR, dB): 47.28 ± 3.31. For the SC task these values are -0.014 ± 0.05, 99.88 ± 0.099, and 55.9 ± 4, and for the ASC task -0.04 ± 0.06, 99.57 ± 0.33, and 47.97 ± 3.6, respectively. The voxel-level gamma pass rates with three different criteria, namely "DTA: 4.79 mm, DD: 1%", "DTA: 10 mm, DD: 5%", and "DTA: 15 mm, DD: 10%", were around 98%.
The mean absolute errors (MAE, Gy) for tumor and whole normal liver are 7.22 ± 5.9 and 1.09 ± 0.86 for AC, 8 ± 9.3 and 0.9 ± 0.8 for SC, and 11.8 ± 12.02 and 1.3 ± 0.98 for ASC, respectively. We developed multiple models for three different clinical scenarios, namely AC, SC, and ASC, using patient-specific Monte Carlo scatter-corrected and CT-based attenuation-corrected images. After training with a larger dataset, these task-specific models could perform the essential corrections where CT images are either unavailable or unreliable due to misalignment.
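Gamma analysis combines a dose-difference (DD) and a distance-to-agreement (DTA) criterion per voxel: each reference point passes if some nearby evaluated point satisfies sqrt((Δx/DTA)² + (ΔD/DD)²) ≤ 1. A simplified 1-D global-gamma sketch (brute-force search, not a clinical implementation):

```python
import numpy as np

def gamma_pass_rate_1d(ref, eval_, spacing, dta_mm, dd_pct):
    # Simplified 1-D global gamma: for each reference point, search all
    # evaluated points for min sqrt((dx/DTA)^2 + (dD/DD_abs)^2); pass if <= 1.
    dd_abs = dd_pct / 100.0 * ref.max()   # global dose-difference criterion
    x = np.arange(ref.size) * spacing     # positions in mm
    passed = 0
    for i, d_ref in enumerate(ref):
        dist = (x - x[i]) / dta_mm
        diff = (eval_ - d_ref) / dd_abs
        gamma = np.sqrt(dist**2 + diff**2).min()
        passed += gamma <= 1.0
    return 100.0 * passed / ref.size

# Identical dose profiles pass everywhere by construction
profile = np.sin(np.linspace(0, 3, 50)) ** 2 + 0.1
rate = gamma_pass_rate_1d(profile, profile, spacing=1.0, dta_mm=3.0, dd_pct=3.0)
print(f"pass rate: {rate:.1f}%")
```

Real SIRT dosimetry uses the 3-D analogue with interpolation between voxels; the loose criteria in the abstract (up to 15 mm / 10%) reflect the low spatial resolution of <sup>90</sup>Y SPECT.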

Accelerating brain T2-weighted imaging using artificial intelligence-assisted compressed sensing combined with deep learning-based reconstruction: a feasibility study at 5.0T MRI.

Wen Y, Ma H, Xiang S, Feng Z, Guan C, Li X

pubmed · Jul 1 2025
T2-weighted imaging (T2WI), renowned for its sensitivity to edema and lesions, faces clinical limitations due to prolonged scanning times, which increase patient discomfort and motion artifacts. Individually, artificial intelligence-assisted compressed sensing (ACS) and deep learning-based reconstruction (DLR) have demonstrated effectiveness in accelerated scanning; however, the synergistic potential of ACS combined with DLR at 5.0T remains unexplored. This study systematically evaluates the diagnostic efficacy of the integrated ACS-DLR technique for T2WI at 5.0T, comparing it to conventional parallel imaging (PI) protocols. A prospective analysis was performed on 98 participants who underwent brain T2WI scans using the ACS, DLR, and PI techniques. Two observers evaluated overall image quality, truncation artifacts, motion artifacts, cerebrospinal fluid flow artifacts, vascular pulsation artifacts, and the significance of lesions, and subjective rating differences among the three sequences were compared. Objective assessment involved the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in gray matter, white matter, and cerebrospinal fluid for each sequence; the SNR, CNR, and acquisition time of each sequence were compared. The acquisition time for ACS and DLR was reduced by 78%. The overall image quality of DLR was higher than that of ACS (P < 0.001) and equivalent to PI (P > 0.05). The SNR of the DLR sequence was the highest, and the CNR of DLR was higher than that of the ACS sequence (P < 0.001) and equivalent to PI (P > 0.05). The integration of ACS and DLR enables ultrafast acquisition of brain T2WI while maintaining superior SNR and comparable CNR relative to PI sequences.
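SNR and CNR in studies like this are typically computed from ROI statistics; one common convention, sketched with invented intensity values (conventions vary, e.g. noise-SD scaling for magnitude images):

```python
import numpy as np

def snr(roi_signal, roi_noise):
    # SNR = mean ROI signal / SD of a background-noise ROI
    return roi_signal.mean() / roi_noise.std(ddof=1)

def cnr(roi_a, roi_b, roi_noise):
    # CNR = |mean difference between two tissue ROIs| / noise SD
    return abs(roi_a.mean() - roi_b.mean()) / roi_noise.std(ddof=1)

# Illustrative values: gray matter ~120, white matter ~90, noise SD ~5
rng = np.random.default_rng(42)
gm = rng.normal(120, 5, 500)
wm = rng.normal(90, 5, 500)
bg = rng.normal(0, 5, 500)
print(f"SNR(GM) = {snr(gm, bg):.1f}, CNR(GM/WM) = {cnr(gm, wm, bg):.1f}")
```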

Preoperative MRI-based deep learning reconstruction and classification model for assessing rectal cancer.

Yuan Y, Ren S, Lu H, Chen F, Xiang L, Chamberlain R, Shao C, Lu J, Shen F, Chen L

pubmed · Jul 1 2025
To determine whether deep learning reconstruction (DLR) could improve the image quality of rectal MR images, and to explore the discrimination of the TN stage of rectal cancer by different readers and deep learning classification models, compared with conventional MR images without DLR. High-resolution T2-weighted, diffusion-weighted imaging (DWI), and contrast-enhanced T1-weighted imaging (CE-T1WI) images from patients with pathologically diagnosed rectal cancer were retrospectively processed with and without DLR and assessed by five readers. The first two readers measured the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the lesions. The overall image quality and lesion display performance for each sequence with and without DLR were independently scored using a five-point scale, and the TN stage of rectal cancer lesions was evaluated by the other three readers. Fifty patients were randomly selected for a further comparison between DLR and a traditional denoising filter. Deep learning classification models were developed and compared for the TN stage. Receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA) were used to evaluate the diagnostic performance of the proposed model. Overall, 178 patients were evaluated. The SNR and CNR of the lesion on images with DLR were significantly higher than those without DLR for T2WI, DWI, and CE-T1WI, respectively (p < 0.0001). A significant difference was observed in overall image quality and lesion display performance between images with and without DLR (p < 0.0001). The image quality scores, SNR, and CNR values of the DLR image set were significantly larger than those of the original and filter-enhanced image sets (all p values < 0.05) for all three sequences.
The deep learning classification models with DLR achieved good discrimination of the TN stage, with area under the curve (AUC) values of 0.937 (95% CI 0.839-0.977) and 0.824 (95% CI 0.684-0.913) in the test sets, respectively. Deep learning reconstruction and classification models could improve the image quality of rectal MR images and enhance diagnostic performance for determining the TN stage of patients with rectal cancer.
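An AUC such as the 0.937 reported here can be computed without any library via the Mann-Whitney formulation; a small self-contained sketch:

```python
import numpy as np

def auc_mann_whitney(y_true, scores):
    # AUC = P(score_pos > score_neg): the Mann-Whitney U statistic
    # normalized by n_pos * n_neg, with ties counted as one half.
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (pos.size * neg.size)

# Toy example: 3 of the 4 positive/negative pairs are correctly ordered
y = np.array([0, 0, 1, 1])
s = np.array([0.1, 0.4, 0.35, 0.8])
print(auc_mann_whitney(y, s))  # 0.75
```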

CZT-based photon-counting-detector CT with deep-learning reconstruction: image quality and diagnostic confidence for lung tumor assessment.

Sasaki T, Kuno H, Nomura K, Muramatsu Y, Aokage K, Samejima J, Taki T, Goto E, Wakabayashi M, Furuya H, Taguchi H, Kobayashi T

pubmed · Jul 1 2025
This is a preliminary analysis of one of the secondary endpoints in a prospective study cohort. The aim of this study is to assess the image quality and diagnostic confidence for lung cancer of CT images generated using cadmium-zinc-telluride (CZT)-based photon-counting-detector CT (PCD-CT), comparing these super-high-resolution (SHR) images with conventional normal-resolution (NR) CT images. Twenty-five patients (median age 75 years, interquartile range 66-78 years, 18 men and 7 women) with 29 lung nodules overall (including two patients with 4 and 2 nodules, respectively) were enrolled to undergo PCD-CT. Three types of images were reconstructed: a 512 × 512 matrix with adaptive iterative dose reduction 3D (AIDR 3D) as the NR<sub>AIDR3D</sub> image, a 1024 × 1024 matrix with AIDR 3D as the SHR<sub>AIDR3D</sub> image, and a 1024 × 1024 matrix with deep-learning reconstruction (DLR) as the SHR<sub>DLR</sub> image. For qualitative analysis, two radiologists evaluated the matched reconstructed series twice (NR<sub>AIDR3D</sub> vs. SHR<sub>AIDR3D</sub> and SHR<sub>AIDR3D</sub> vs. SHR<sub>DLR</sub>) and scored the presence of imaging findings such as spiculation, lobulation, and appearance of ground-glass opacity or air bronchogram, as well as image quality and diagnostic confidence, using a 5-point Likert scale. For quantitative analysis, contrast-to-noise ratios (CNRs) of the three image types were compared. In the qualitative analysis, compared to NR<sub>AIDR3D</sub>, SHR<sub>AIDR3D</sub> yielded higher image quality and diagnostic confidence, except for image noise (all P < 0.01). In comparison with SHR<sub>AIDR3D</sub>, SHR<sub>DLR</sub> yielded higher image quality and diagnostic confidence (all P < 0.01). In the quantitative analysis, CNRs in the NR<sub>AIDR3D</sub> and SHR<sub>DLR</sub> groups were higher than those in the SHR<sub>AIDR3D</sub> group (P = 0.003 and P < 0.001, respectively).
In PCD-CT, SHR<sub>DLR</sub> images provided the highest image quality and diagnostic confidence for lung tumor evaluation, followed by SHR<sub>AIDR3D</sub> and NR<sub>AIDR3D</sub> images. DLR demonstrated superior noise reduction compared to other reconstruction methods.

Deep Guess acceleration for explainable image reconstruction in sparse-view CT.

Loli Piccolomini E, Evangelista D, Morotti E

pubmed · Jul 1 2025
Sparse-view computed tomography (CT) is an emerging protocol designed to reduce X-ray radiation dose in medical imaging. Reconstructions based on the traditional Filtered Back Projection algorithm suffer from severe artifacts due to sparse data, while Model-Based Iterative Reconstruction (MBIR) algorithms, though better at mitigating noise through regularization, are too computationally costly for clinical use. This paper introduces a novel technique, denoted the Deep Guess acceleration scheme, that uses a trained neural network both to speed up regularized MBIR and to enhance reconstruction accuracy. We integrate state-of-the-art deep learning tools to initialize a clever starting guess for a proximal algorithm solving a non-convex model, thus computing a (mathematically) interpretable solution image in a few iterations. Experimental results on real and synthetic CT images demonstrate the effectiveness of Deep Guess in (very) sparse tomographic protocols, where it outperforms its purely variational counterpart and many state-of-the-art data-driven approaches. We also consider a ground-truth-free implementation and test the robustness of the proposed framework to noise.
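The "clever starting guess for a proximal algorithm" idea can be illustrated on a toy sparse-recovery problem: warm-starting ISTA (a standard proximal gradient method, used here as a stand-in for the paper's non-convex solver) reaches a lower objective in the same few iterations than a cold start. All problem sizes and the warm-start construction below are invented for illustration:

```python
import numpy as np

def ista(A, y, lam, x0, n_iter):
    # Proximal gradient (ISTA) for min_x 0.5*||Ax - y||^2 + lam*||x||_1,
    # started from x0; a learned network would supply a good x0.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = x0.copy()
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - y) / L      # gradient step on the smooth part
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

def objective(A, y, lam, x):
    return 0.5 * np.sum((A @ x - y) ** 2) + lam * np.sum(np.abs(x))

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 60)) / np.sqrt(30)   # toy underdetermined system
x_true = np.zeros(60)
x_true[:5] = rng.normal(size=5)               # sparse ground truth
y = A @ x_true
lam = 0.01

cold = ista(A, y, lam, np.zeros(60), n_iter=5)                        # cold start
warm = ista(A, y, lam, x_true + 0.01 * rng.normal(size=60), n_iter=5)  # "deep guess"
print(objective(A, y, lam, cold), objective(A, y, lam, warm))
```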

Phantom-based evaluation of image quality in Transformer-enhanced 2048-matrix CT imaging at low and ultralow doses.

Li Q, Liu L, Zhang Y, Zhang L, Wang L, Pan Z, Xu M, Zhang S, Xie X

pubmed · Jul 1 2025
To compare the quality of standard 512-matrix, standard 1024-matrix, and Swin2SR-based 2048-matrix phantom images under different scanning protocols. The Catphan 600 phantom was scanned using a multidetector CT scanner under two protocols: 120 kV/100 mA (CT dose index volume = 3.4 mGy) to simulate low-dose CT, and 70 kV/40 mA (0.27 mGy) to simulate ultralow-dose CT. Raw data were reconstructed into standard 512-matrix images using three methods: filtered back projection (FBP), adaptive statistical iterative reconstruction at 40% intensity (ASIR-V), and deep learning image reconstruction at high intensity (DLIR-H). The Swin2SR super-resolution model was used to generate 2048-matrix images (Swin2SR-2048), while a super-resolution convolutional neural network (SRCNN) model generated comparison 2048-matrix images (SRCNN-2048). Image quality was evaluated with ImQuest software (v7.2.0.0, Duke University) based on line-pair clarity, task-based transfer function (TTF), image noise, and noise power spectrum (NPS). At equivalent radiation doses and reconstruction methods, Swin2SR-2048 images resolved more line pairs than both standard-512 and standard-1024 images. Except for the 0.27 mGy/DLIR-H/standard kernel sequence, the TTF-50% of Teflon increased after super-resolution processing. Statistically significant differences in TTF-50% were observed between the standard 512, 1024, and Swin2SR-2048 images (all p < 0.05). Swin2SR-2048 images exhibited lower image noise and NPS<sub>peak</sub> than both standard 512- and 1024-matrix images, with significant differences observed in all three matrix types (all p < 0.05). Swin2SR-2048 images also demonstrated superior quality compared to SRCNN-2048, with significant differences in image noise (p < 0.001), NPS<sub>peak</sub> (p < 0.05), and TTF-50% for Teflon (p < 0.05).
Transformer-enhanced 2048-matrix CT images improve spatial resolution and reduce image noise compared to standard-512 and -1024 matrix images.
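The noise power spectrum (NPS) used in this evaluation is, in simplified form, the ensemble-averaged squared DFT of zero-mean noise ROIs; a sketch with synthetic noise (normalization simplified relative to the full IEC definition):

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=1.0):
    # Simplified 2-D noise power spectrum: average |DFT(zero-mean ROI)|^2
    # over an ensemble of noise-only ROIs, scaled by pixel area / ROI size.
    ny, nx = noise_rois[0].shape
    acc = np.zeros((ny, nx))
    for roi in noise_rois:
        acc += np.abs(np.fft.fft2(roi - roi.mean())) ** 2
    return acc * pixel_mm**2 / (len(noise_rois) * nx * ny)

# Synthetic ensemble of white-noise ROIs (illustrative, SD ~10 HU)
rng = np.random.default_rng(7)
rois = [rng.normal(0, 10, (64, 64)) for _ in range(8)]
nps = np.fft.fftshift(nps_2d(rois))   # shift so the DC bin sits centrally
```

For white noise the NPS is flat; denoising reconstructions like DLIR typically suppress and reshape it, which is why the NPS<sub>peak</sub> comparison in the abstract is informative beyond a single noise-SD number.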
