Page 1 of 653 results

Advantages of deep learning reconstruction algorithm in ultra-high-resolution CT for the diagnosis of pancreatic cystic neoplasm.

Sofue K, Ueno Y, Yabe S, Ueshima E, Yamaguchi T, Masuda A, Sakai A, Toyama H, Fukumoto T, Hori M, Murakami T

PubMed · May 30, 2025
This study aimed to evaluate the image quality and clinical utility of a deep learning reconstruction (DLR) algorithm in ultra-high-resolution computed tomography (UHR-CT) for the diagnosis of pancreatic cystic neoplasms (PCNs). This retrospective study included 45 patients with PCNs between March 2020 and February 2022. Contrast-enhanced UHR-CT images were obtained and reconstructed using DLR and hybrid iterative reconstruction (IR). Image noise and contrast-to-noise ratio (CNR) were measured. Two radiologists assessed the diagnostic performance of the imaging findings associated with PCNs using a 5-point Likert scale. Diagnostic performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC), were calculated. Quantitative and qualitative features were compared between CT with DLR and hybrid IR. Interobserver agreement for qualitative assessments was also analyzed. DLR significantly reduced image noise and increased CNR compared with hybrid IR for all objects (p < 0.001). Radiologists rated DLR images as superior in overall quality, lesion delineation, and vessel conspicuity (p < 0.001). DLR produced higher AUROC values for diagnostic imaging findings (ductal communication: 0.887‒0.938 vs. 0.816‒0.827; enhanced mural nodule: 0.843‒0.916 vs. 0.785‒0.801), although DLR did not directly improve sensitivity, specificity, or accuracy. Interobserver agreement for qualitative assessments was higher with DLR (κ = 0.69‒0.82 vs. 0.57‒0.73). DLR improved image quality and diagnostic performance by effectively reducing image noise and improving lesion conspicuity in the diagnosis of PCNs on UHR-CT, and provided greater diagnostic confidence in the assessment of imaging findings associated with PCNs.
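The interobserver agreement reported here (κ = 0.69‒0.82) is Cohen's kappa over the two radiologists' Likert ratings. A minimal sketch of the unweighted statistic (the study may use a weighted variant; this is an illustration, not the authors' code):

```python
import numpy as np

def cohens_kappa(rater1, rater2):
    """Unweighted Cohen's kappa for two raters' categorical scores
    (e.g. 5-point Likert ratings of image quality)."""
    r1, r2 = np.asarray(rater1), np.asarray(rater2)
    categories = np.union1d(r1, r2)
    p_observed = np.mean(r1 == r2)  # raw agreement
    # chance agreement from each rater's marginal frequencies
    p_chance = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_observed - p_chance) / (1.0 - p_chance)
```

Kappa corrects raw agreement for the agreement expected by chance, which is why it is preferred over simple percent agreement for Likert-scale assessments.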

Deep learning reconstruction improves computer-aided pulmonary nodule detection and measurement accuracy for ultra-low-dose chest CT.

Wang J, Zhu Z, Pan Z, Tan W, Han W, Zhou Z, Hu G, Ma Z, Xu Y, Ying Z, Sui X, Jin Z, Song L, Song W

PubMed · May 30, 2025
To compare the image quality, pulmonary nodule detectability, and measurement accuracy between deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) of chest ultra-low-dose CT (ULDCT). Participants who underwent chest standard-dose CT (SDCT) followed by ULDCT from October 2020 to January 2022 were prospectively included. ULDCT images reconstructed with HIR and DLR were compared with SDCT images to evaluate image quality, nodule detection rate, and measurement accuracy using a commercially available deep learning-based nodule evaluation system. The Wilcoxon signed-rank test was used to evaluate the percentage errors of nodule size and nodule volume between HIR and DLR images. Eighty-four participants (54 ± 13 years; 26 men) were finally enrolled. The effective radiation doses of ULDCT and SDCT were 0.16 ± 0.02 mSv and 1.77 ± 0.67 mSv, respectively (P < 0.001). The mean ± standard deviation of lung tissue noise was 61.4 ± 3.0 HU for SDCT, and 61.5 ± 2.8 HU and 55.1 ± 3.4 HU for ULDCT reconstructed with the HIR-Strong setting (HIR-Str) and the DLR-Strong setting (DLR-Str), respectively (P < 0.001). A total of 535 nodules were detected. The nodule detection rates of ULDCT HIR-Str and ULDCT DLR-Str were 74.0% and 83.4%, respectively (P < 0.001). The absolute percentage error in nodule volume relative to SDCT was 19.5% for ULDCT HIR-Str versus 17.9% for ULDCT DLR-Str (P < 0.001). Compared with HIR, DLR reduced image noise, increased the nodule detection rate, and improved the measurement accuracy of nodule volume at chest ULDCT. Trial registration: not applicable.
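The volume-accuracy comparison reduces to an absolute percentage error of each ULDCT measurement against its SDCT reference. A minimal sketch (illustrative, not the study's evaluation code):

```python
import numpy as np

def abs_percentage_error(measured, reference):
    """Absolute percentage error of a nodule measurement (e.g. volume
    in mm^3) against the standard-dose (reference) value."""
    measured = np.asarray(measured, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.abs(measured - reference) / reference * 100.0
```

Per-nodule errors from the two reconstructions would then be compared pairwise with a Wilcoxon signed-rank test, as the study describes.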

GLIMPSE: Generalized Locality for Scalable and Robust CT.

Khorashadizadeh A, Debarnot V, Liu T, Dokmanic I

PubMed · May 30, 2025
Deep learning has become the state-of-the-art approach to medical tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a multiscale convolutional neural network (CNN) which computes the final reconstruction. Despite good results on in-distribution test data, this often results in overfitting to certain large-scale structures and poor generalization on out-of-distribution (OOD) samples. Moreover, the memory and computational complexity of multiscale CNNs scale unfavorably with image resolution, making them impractical at realistic clinical resolutions. In this paper, we introduce GLIMPSE, a local coordinate-based neural network for computed tomography which reconstructs a pixel value by processing only the measurements associated with the neighborhood of that pixel. GLIMPSE significantly outperforms successful CNNs on OOD samples, while achieving comparable or better performance on in-distribution test data and maintaining a memory footprint almost independent of image resolution: 5 GB of memory suffices to train on 1024 × 1024 images, orders of magnitude less than comparable CNNs require. GLIMPSE is fully differentiable and can be used plug-and-play in arbitrary deep learning architectures, enabling feats such as correcting miscalibrated projection orientations.
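The locality principle can be sketched by gathering only the sinogram samples whose rays pass near a given pixel. A simplified parallel-beam illustration (not the authors' implementation; the function name and layout are invented for this sketch):

```python
import numpy as np

def local_measurements(sinogram, thetas, x, y, k=3):
    """Gather, for the pixel at (x, y), the k detector samples closest to
    its parallel-beam forward projection s = x*cos(theta) + y*sin(theta)
    at every angle. A per-pixel network (as in GLIMPSE) would map these
    local features to one pixel value. sinogram: (n_angles, n_detectors)."""
    n_angles, n_det = sinogram.shape
    center = (n_det - 1) / 2.0                    # detector coord of s = 0
    s = x * np.cos(thetas) + y * np.sin(thetas)   # projection coordinate
    idx = np.rint(s + center).astype(int)         # nearest detector bin
    offsets = np.arange(k) - k // 2               # neighborhood offsets
    cols = np.clip(idx[:, None] + offsets[None, :], 0, n_det - 1)
    return sinogram[np.arange(n_angles)[:, None], cols]
```

Because each pixel consumes a fixed-size (n_angles × k) feature set regardless of image size, the memory footprint stays nearly independent of resolution, which is the property the abstract highlights.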

Deep learning enables fast and accurate quantification of MRI-guided near-infrared spectral tomography for breast cancer diagnosis.

Feng J, Tang Y, Lin S, Jiang S, Xu J, Zhang W, Geng M, Dang Y, Wei C, Li Z, Sun Z, Jia K, Pogue BW, Paulsen KD

PubMed · May 29, 2025
The utilization of magnetic resonance (MR) imaging to guide near-infrared spectral tomography (NIRST) shows significant potential for improving the specificity and sensitivity of breast cancer diagnosis. However, the efficiency and accuracy of NIRST image reconstruction have been limited by the complexities of light propagation modeling and MRI image segmentation. To address these challenges, we developed and evaluated a deep learning-based approach for MR-guided 3D NIRST image reconstruction (DL-MRg-NIRST). Using a network trained on synthetic data, the DL-MRg-NIRST system reconstructed images from data acquired during 38 clinical imaging exams of patients with breast abnormalities. Statistical analysis of the results demonstrated a sensitivity of 87.5%, a specificity of 92.9%, and a diagnostic accuracy of 89.5% in distinguishing pathologically defined benign from malignant lesions. Additionally, the combined use of MRI and DL-MRg-NIRST diagnoses achieved an area under the receiver operating characteristic (ROC) curve of 0.98. Remarkably, the DL-MRg-NIRST image reconstruction process required only 1.4 seconds, significantly faster than state-of-the-art MR-guided NIRST methods.
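The reported operating point is consistent with, for example, 24 malignant and 14 benign lesions (21 true positives, 13 true negatives) in the 38 exams — a hypothetical split used only to illustrate how the three metrics are computed from confusion-matrix counts:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true-positive rate
    specificity = tn / (tn + fp)          # true-negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts consistent with the reported 87.5% / 92.9% / 89.5%
sens, spec, acc = diagnostic_metrics(tp=21, fp=1, tn=13, fn=3)
```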

CT-denoimer: efficient contextual transformer network for low-dose CT denoising.

Zhang Y, Xu F, Zhang R, Guo Y, Wang H, Wei B, Ma F, Meng J, Liu J, Lu H, Chen Y

PubMed · May 29, 2025
Low-dose computed tomography (LDCT) effectively reduces radiation exposure to patients, but introduces severe noise artifacts that affect diagnostic accuracy. Recently, Transformer-based network architectures have been widely applied to LDCT image denoising, generally achieving superior results compared to traditional convolutional methods. However, these methods are often hindered by high computational cost and struggle to capture complex local contextual features, which degrades denoising performance. In this work, we propose CT-Denoimer, an efficient CT Denoising Transformer network that captures both global correlations and intricate, spatially varying local contextual details in CT images, enabling the generation of high-quality images. The core of our framework is a Transformer module that consists of two key components: the Multi-Dconv head Transposed Attention (MDTA) and the Mixed Contextual Feed-forward Network (MCFN). The MDTA block captures global correlations in the image with linear computational complexity, while the MCFN block manages multi-scale local contextual information, both static and dynamic, through a series of Enhanced Contextual Transformer (eCoT) modules. In addition, we incorporate Operation-Wise Attention Layers (OWALs) to enable collaborative refinement in the proposed CT-Denoimer, enhancing its ability to handle complex and varying noise patterns in LDCT images more effectively. Extensive experimental validation on both the AAPM-Mayo public dataset and a real-world clinical dataset demonstrated the state-of-the-art performance of the proposed CT-Denoimer: it achieved a peak signal-to-noise ratio (PSNR) of 33.681 dB, a structural similarity index measure (SSIM) of 0.921, an information fidelity criterion (IFC) of 2.857, and a visual information fidelity (VIF) of 0.349. Subjective assessment by radiologists gave an average score of 4.39, confirming its clinical applicability and clear advantages over existing methods.
This study presents an innovative CT denoising Transformer network that sets a new benchmark in LDCT image denoising, excelling in both noise reduction and fine structure preservation.
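PSNR, the headline metric above, is a one-liner over the mean squared error against the reference full-dose image. A minimal sketch (SSIM, IFC, and VIF need more machinery and are omitted):

```python
import numpy as np

def psnr(reference, estimate, data_range):
    """Peak signal-to-noise ratio in dB of a denoised estimate against
    the reference image; data_range is the maximum possible pixel span."""
    diff = np.asarray(reference, dtype=float) - np.asarray(estimate, dtype=float)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```

A higher PSNR means lower mean squared error relative to the peak signal, so the 33.681 dB figure is directly comparable across methods only when computed over the same reference images and data range.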

Deep Learning CAIPIRINHA-VIBE Improves and Accelerates Head and Neck MRI.

Nitschke LV, Lerchbaumer M, Ulas T, Deppe D, Nickel D, Geisel D, Kubicka F, Wagner M, Walter-Rittel T

PubMed · May 29, 2025
The aim of this study was to evaluate image quality for contrast-enhanced (CE) neck MRI with a deep learning-reconstructed VIBE sequence with acceleration factors (AF) 4 (DL4-VIBE) and 6 (DL6-VIBE). Patients referred for neck MRI were examined on a 3-Tesla scanner in this prospective, single-center study. Four CE fat-saturated (FS) VIBE sequences were acquired in each patient: Star-VIBE (4:01 min), VIBE (2:05 min), DL4-VIBE (0:24 min), and DL6-VIBE (0:17 min). Image quality was evaluated by three radiologists with a 5-point Likert scale and included overall image quality, muscle contour delineation, conspicuity of mucosa and pharyngeal musculature, FS uniformity, and motion artifacts. Objective image quality was assessed with signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and quantification of metal artifacts. Sixty-eight patients (60.3% male; mean age 57.4 ± 16 years) were included in this study. DL4-VIBE was superior for overall image quality, delineation of muscle contours, differentiation of mucosa and pharyngeal musculature, vascular delineation, and motion artifacts. Notably, DL4-VIBE exhibited exceptional FS uniformity (p<0.001). SNR and CNR were superior for DL4-VIBE compared to all other sequences (p<0.001). Metal artifacts were least pronounced in the standard VIBE, followed by DL4-VIBE (p<0.001). Although DL6-VIBE was inferior to DL4-VIBE, it demonstrated improved FS homogeneity, delineation of pharyngeal mucosa, and CNR compared to Star-VIBE and VIBE. DL4-VIBE significantly improves image quality for CE neck MRI with a fraction of the scan time of conventional sequences.
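The objective measures used here (SNR, CNR) reduce to ROI statistics. A minimal sketch, assuming noise is estimated as the standard deviation of a background ROI (a common convention; the study's exact ROI protocol is not specified in the abstract):

```python
import numpy as np

def snr(roi, noise_sd):
    """Signal-to-noise ratio: mean ROI intensity over the noise SD."""
    return np.mean(roi) / noise_sd

def cnr(roi_a, roi_b, noise_sd):
    """Contrast-to-noise ratio between two tissue ROIs
    (e.g. mucosa vs. pharyngeal muscle)."""
    return abs(np.mean(roi_a) - np.mean(roi_b)) / noise_sd
```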

Deep learning reconstruction for improved image quality of ultra-high-resolution brain CT angiography: application in moyamoya disease.

Ma Y, Nakajima S, Fushimi Y, Funaki T, Otani S, Takiya M, Matsuda A, Kozawa S, Fukushima Y, Okuchi S, Sakata A, Yamamoto T, Sakamoto R, Chihara H, Mineharu Y, Arakawa Y, Nakamoto Y

PubMed · May 29, 2025
To investigate vessel delineation and image quality of ultra-high-resolution (UHR) CT angiography (CTA) reconstructed using deep learning reconstruction (DLR) optimised for brain CTA (DLR-brain) in moyamoya disease (MMD), compared with DLR optimised for body CT (DLR-body) and hybrid iterative reconstruction (Hybrid-IR). This retrospective study included 50 patients with suspected or diagnosed MMD who underwent UHR brain CTA. All images were reconstructed using DLR-brain, DLR-body, and Hybrid-IR. Quantitative analysis focussed on moyamoya perforator vessels in the basal ganglia and periventricular anastomosis. For these small vessels, edge sharpness, peak CT number, vessel contrast, full width at half maximum (FWHM), and image noise were measured and compared. Qualitative analysis was performed by visual assessment to compare vessel delineation and image quality. DLR-brain significantly improved edge sharpness, peak CT number, vessel contrast, and FWHM, and significantly reduced image noise compared with DLR-body and Hybrid-IR (P < 0.05). DLR-brain significantly outperformed the other algorithms in the visual assessment (P < 0.001). DLR-brain provided superior visualisation of small intracranial vessels compared with DLR-body and Hybrid-IR in UHR brain CTA.
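FWHM of a vessel's intensity profile is the small-vessel sharpness surrogate measured above. A minimal sketch with linear interpolation at the half-maximum crossings (illustrative, not the study's measurement tool):

```python
import numpy as np

def fwhm(profile, spacing=1.0):
    """Full width at half maximum of a 1-D intensity profile drawn across
    a vessel, with linear interpolation at the two half-maximum crossings.
    spacing is the pixel size (e.g. in mm)."""
    p = np.asarray(profile, dtype=float)
    half = p.min() + (p.max() - p.min()) / 2.0
    above = np.where(p >= half)[0]
    l, r = above[0], above[-1]
    # interpolate the sub-pixel crossing on each side of the peak
    x_left = l - (p[l] - half) / (p[l] - p[l - 1]) if l > 0 else float(l)
    x_right = r + (p[r] - half) / (p[r] - p[r + 1]) if r < len(p) - 1 else float(r)
    return (x_right - x_left) * spacing
```

A narrower FWHM for the same vessel indicates less blur, which is why DLR-brain's smaller FWHM values accompany its higher edge-sharpness scores.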

Free-running isotropic three-dimensional cine magnetic resonance imaging with deep learning image reconstruction.

Erdem S, Erdem O, Stebbings S, Greil G, Hussain T, Zou Q

PubMed · May 29, 2025
Cardiovascular magnetic resonance (CMR) cine imaging is the gold standard for assessing ventricular volumes and function. It typically requires two-dimensional (2D) bSSFP sequences and multiple breath-holds, which can be challenging for patients with limited breath-holding capacity. Three-dimensional (3D) cardiovascular magnetic resonance angiography (MRA) usually suffers from lengthy acquisition. Free-running 3D cine imaging with deep learning (DL) reconstruction offers a potential solution by acquiring both cine and angiography simultaneously. To evaluate the efficiency and accuracy of a ferumoxytol-enhanced 3D cine MR sequence combined with DL reconstruction and Heart-NAV technology in patients with congenital heart disease. This prospective, Institutional Review Board-approved study compared (i) functional and volumetric measurements between 3D and 2D cine images; (ii) contrast-to-noise ratio (CNR) between DL- and compressed sensing (CS)-reconstructed 3D cine images; and (iii) cross-sectional area (CSA) measurements between DL-reconstructed 3D cine images and the clinical 3D MRA images acquired using the bSSFP sequence. Paired t-tests were used to compare group measurements, and Bland-Altman analysis assessed agreement in CSA and volumetric data. Sixteen patients (seven males; median age 6 years) were recruited. 3D cine imaging showed slightly larger right ventricular (RV) volumes and lower RV ejection fraction (EF) compared with 2D cine, with a significant difference only in RV end-systolic volume (P = 0.02). Left ventricular (LV) volumes and EF were slightly higher, and LV mass was lower, without significant differences (P ≥ 0.05). DL-reconstructed 3D cine images showed significantly higher CNR in all pulmonary veins than CS-reconstructed 3D cine images (all P < 0.05).
Highly accelerated free-running 3D cine imaging with DL reconstruction shortens acquisition times and provides comparable volumetric measurements to 2D cine, and comparable CSA to clinical 3D MRA.
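The Bland-Altman analysis used for both the CSA and volumetric comparisons reports the mean difference (bias) and the 95% limits of agreement. A minimal sketch:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement for paired measurements
    (e.g. 3D-cine vs. 2D-cine ventricular volumes)."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)                 # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Narrow limits of agreement around a near-zero bias are what "comparable volumetric measurements" means operationally in this design.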

High-Quality CEST Mapping With Lorentzian-Model Informed Neural Representation.

Chen C, Liu Y, Park SW, Li J, Chan KWY, Huang J, Morel JM, Chan RH

PubMed · May 28, 2025
Chemical Exchange Saturation Transfer (CEST) MRI has demonstrated its remarkable ability to enhance the detection of macromolecules and metabolites with low concentrations. While CEST mapping is essential for quantifying molecular information, conventional methods face critical limitations: model-based approaches are constrained by limited sensitivity and robustness depending heavily on parameter setups, while data-driven deep learning methods lack generalizability across heterogeneous datasets and acquisition protocols. To overcome these challenges, we propose a Lorentzian-model Informed Neural Representation (LINR) framework for high-quality CEST mapping. LINR employs a self-supervised neural architecture embedding the Lorentzian equation - the fundamental biophysical model of CEST signal evolution - to directly reconstruct high-sensitivity parameter maps from raw z-spectra, eliminating dependency on labeled training data. Convergence of the self-supervised training strategy is guaranteed theoretically, ensuring LINR's mathematical validity. The superior performance of LINR in capturing CEST contrasts is revealed through comprehensive evaluations based on synthetic phantoms and in-vivo experiments (including tumor and Alzheimer's disease models). The intuitive parameter-free design enables adaptive integration into diverse CEST imaging workflows, positioning LINR as a versatile tool for non-invasive molecular diagnostics and pathophysiological discovery.
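The Lorentzian model that LINR embeds describes each saturation pool as a Lorentzian line subtracted from unity. A minimal sketch of the forward model (the pool parameters below are placeholders, not values from the paper):

```python
import numpy as np

def lorentzian(offsets, amplitude, fwhm, center):
    """Single Lorentzian line: A * (G/2)^2 / ((G/2)^2 + (w - w0)^2),
    evaluated at saturation offsets w (in ppm)."""
    hw = fwhm / 2.0
    return amplitude * hw**2 / (hw**2 + (np.asarray(offsets, float) - center) ** 2)

def z_spectrum(offsets, pools):
    """Multi-pool CEST z-spectrum: Z(w) = 1 - sum of Lorentzian pools
    (water, amide, NOE, ...), each given as (amplitude, fwhm, center_ppm)."""
    z = np.ones(np.shape(offsets), dtype=float)
    for amplitude, fwhm, center in pools:
        z -= lorentzian(offsets, amplitude, fwhm, center)
    return z
```

Fitting the per-pool amplitudes, widths, and centers to a measured z-spectrum is what "CEST mapping" refers to; LINR's contribution is doing this through a self-supervised neural representation rather than per-voxel least squares.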

Deep Separable Spatiotemporal Learning for Fast Dynamic Cardiac MRI.

Wang Z, Xiao M, Zhou Y, Wang C, Wu N, Li Y, Gong Y, Chang S, Chen Y, Zhu L, Zhou J, Cai C, Wang H, Jiang X, Guo D, Yang G, Qu X

PubMed · May 28, 2025
Dynamic magnetic resonance imaging (MRI) plays an indispensable role in cardiac diagnosis. To enable fast imaging, the k-space data can be undersampled but the image reconstruction poses a great challenge of high-dimensional processing. This challenge necessitates extensive training data in deep learning reconstruction methods. In this work, we propose a novel and efficient approach, leveraging a dimension-reduced separable learning scheme that can perform exceptionally well even with highly limited training data. We design this new approach by incorporating spatiotemporal priors into the development of a Deep Separable Spatiotemporal Learning network (DeepSSL), which unrolls an iteration process of a 2D spatiotemporal reconstruction model with both temporal low-rankness and spatial sparsity. Intermediate outputs can also be visualized to provide insights into the network behavior and enhance interpretability. Extensive results on cardiac cine datasets demonstrate that the proposed DeepSSL surpasses state-of-the-art methods both visually and quantitatively, while reducing the demand for training cases by up to 75%. Additionally, its preliminary adaptability to unseen cardiac patients has been verified through a blind reader study conducted by experienced radiologists and cardiologists. Furthermore, DeepSSL enhances the accuracy of the downstream task of cardiac segmentation and exhibits robustness in prospectively undersampled real-time cardiac MRI. DeepSSL is efficient under highly limited training data and adaptive to patients and prospective undersampling. This approach holds promise in addressing the escalating demand for high-dimensional data reconstruction in MRI applications.
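The temporal low-rankness prior that DeepSSL unrolls can be illustrated with the classic Casorati-matrix SVD truncation — a sketch of the prior alone, not of the network:

```python
import numpy as np

def temporal_lowrank(frames, rank):
    """Project a dynamic series of shape (nx, ny, nt) onto its best
    rank-r temporal subspace by truncating the SVD of the Casorati
    matrix (pixels x frames)."""
    nx, ny, nt = frames.shape
    casorati = frames.reshape(nx * ny, nt)
    u, s, vt = np.linalg.svd(casorati, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]   # truncated reconstruction
    return approx.reshape(nx, ny, nt)
```

Cardiac cine series are highly correlated across frames, so a small temporal rank captures most of the signal; an unrolled network alternates a step like this with a spatial-sparsity step and data consistency.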