Fat-water MRI separation using deep complex convolution network.

Ganeshkumar M, Kandasamy D, Sharma R, Mehndiratta A

PubMed · Jul 3, 2025
Deep complex convolutional networks (DCCNs) utilize complex-valued convolutions and can process complex-valued MRI signals directly, without splitting them into two real-valued magnitude and phase components. The performance of a DCCN and a real-valued U-Net was thoroughly investigated within a physics-informed, subject-specific ad-hoc reconstruction method for fat-water separation and compared against a widely used reference approach. A comprehensive test dataset (n = 33) was used for performance analysis: the 2012 ISMRM fat-water separation workshop dataset, containing 28 batches of multi-echo MRIs with 3-15 echoes from the abdomen, thigh, knee, and phantoms acquired on 1.5 T and 3 T scanners, together with multi-echo MRIs of five MAFLD patients acquired in our clinical radiology department. The quantitative results demonstrated that, within the ad-hoc reconstruction method, the DCCN produced fat-water maps with better normalized RMS error and structural similarity index relative to the reference approach than the real-valued U-Net. The DCCN achieved overall average SSIMs of 0.847 ± 0.069 and 0.861 ± 0.078 for the fat and water maps, respectively, whereas the U-Net achieved only 0.653 ± 0.166 and 0.729 ± 0.134. The average liver PDFF from the DCCN achieved a correlation coefficient R of 0.847 with the reference approach.
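
As a rough illustration of the core building block, the sketch below shows how a complex-valued 2D convolution can be assembled from two real-valued convolutions via (a+ib)(c+id) = (ac-bd) + i(ad+bc); the layer and tensor shapes are illustrative assumptions, not the authors' DCCN implementation.

    import torch
    import torch.nn as nn

    class ComplexConv2d(nn.Module):
        """Complex convolution built from two real-valued convolutions:
        (x_re + i*x_im)*(W_re + i*W_im) = (W_re*x_re - W_im*x_im) + i*(W_re*x_im + W_im*x_re)."""
        def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
            super().__init__()
            self.conv_re = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
            self.conv_im = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

        def forward(self, x_re, x_im):
            out_re = self.conv_re(x_re) - self.conv_im(x_im)
            out_im = self.conv_re(x_im) + self.conv_im(x_re)
            return out_re, out_im

    # Hypothetical input: a 6-echo acquisition treated as 6 complex-valued channels.
    x_re = torch.randn(1, 6, 128, 128)
    x_im = torch.randn(1, 6, 128, 128)
    y_re, y_im = ComplexConv2d(6, 16)(x_re, x_im)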

3D Heart Reconstruction from Sparse Pose-agnostic 2D Echocardiographic Slices

Zhurong Chen, Jinhua Chen, Wei Zhuo, Wufeng Xue, Dong Ni

arXiv preprint · Jul 3, 2025
Echocardiography (echo) plays an indispensable role in the clinical practice of heart diseases. However, ultrasound imaging typically provides only two-dimensional (2D) cross-sectional images from a few specific views, making interpretation challenging and estimation of clinical parameters such as the left ventricular (LV) volume inaccurate. 3D ultrasound imaging offers an alternative for 3D quantification, but it is still limited by low spatial and temporal resolution and by highly demanding manual delineation. To address these challenges, we propose an innovative framework for reconstructing personalized 3D heart anatomy from the 2D echo slices that are frequently used in clinical practice. Specifically, a novel 3D reconstruction pipeline is designed that alternately optimizes the 3D pose estimation of these 2D slices and their 3D integration using an implicit neural network, progressively transforming a prior 3D heart shape into a personalized 3D heart model. We validate the method on two datasets. When six planes are used, the reconstructed 3D heart yields a significant improvement in LV volume estimation over the bi-plane method (error: 1.98% vs. 20.24%). In addition, the framework achieves an important breakthrough in that it can also estimate RV volume from 2D echo slices (with an error of 5.75%). This study provides a new way to perform personalized 3D structural and functional analysis from cardiac ultrasound and has great potential in clinical practice.
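
The implicit-network component can be pictured as a coordinate network that maps a 3D point to an occupancy value; the following is a minimal PyTorch sketch under assumed layer sizes, not the authors' architecture.

    import torch
    import torch.nn as nn

    class ImplicitHeartShape(nn.Module):
        """Implicit shape representation: maps (x, y, z) coordinates to an
        occupancy probability, so the heart surface is a level set of the network."""
        def __init__(self, hidden=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, xyz):                  # xyz: (N, 3) points in normalized space
            return torch.sigmoid(self.net(xyz))  # occupancy in [0, 1]

    # Points sampled from a posed 2D slice supervise the 3D shape during optimization.
    points = torch.rand(1024, 3) * 2.0 - 1.0     # query points in [-1, 1]^3
    occupancy = ImplicitHeartShape()(points)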

De-speckling of medical ultrasound image using metric-optimized knowledge distillation.

Khalifa M, Hamza HM, Hosny KM

PubMed · Jul 3, 2025
Ultrasound imaging provides real-time views of internal organs, which are essential for accurate diagnosis and treatment. However, speckle noise, caused by wave interactions with tissues, creates a grainy texture that hides crucial details. This noise varies with image intensity, which limits the effectiveness of traditional denoising methods. We introduce the Metric-Optimized Knowledge Distillation (MK) model, a deep-learning approach that utilizes Knowledge Distillation (KD) for denoising ultrasound images. Our method transfers knowledge from a high-performing teacher network to a smaller student network designed for this task. By leveraging KD, the model removes speckle noise while preserving the key anatomical details needed for accurate diagnosis. A key innovation of our paper is the metric-guided training strategy: the evaluation metrics used to assess the model are computed repeatedly during training and incorporated into the loss function, enabling the model to optimally reduce noise and enhance image quality. We evaluate the proposed method against state-of-the-art despeckling techniques, including DnCNN and other recent models. The results demonstrate that our approach achieves superior noise reduction and image quality preservation, making it a valuable tool for enhancing the diagnostic utility of ultrasound images.
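
One plausible reading of the metric-guided training strategy is a loss that mixes a pixel term, a teacher-imitation term, and an image-quality-metric term; the sketch below uses a simplified global SSIM and illustrative weights, and is not the paper's exact formulation.

    import torch
    import torch.nn.functional as F

    def global_ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
        """Simplified, non-windowed SSIM computed from global image statistics."""
        mx, my = x.mean(), y.mean()
        vx, vy = x.var(), y.var()
        cov = ((x - mx) * (y - my)).mean()
        return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    def metric_guided_kd_loss(student_out, teacher_out, clean, alpha=0.5, beta=0.3):
        """Pixel fidelity to the clean target, imitation of the teacher's output,
        and a metric term that rewards structural similarity (weights are illustrative)."""
        pixel = F.mse_loss(student_out, clean)
        imitation = F.mse_loss(student_out, teacher_out)
        metric = 1.0 - global_ssim(student_out, clean)
        return pixel + alpha * imitation + beta * metric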

ComptoNet: a Compton-map guided deep learning framework for multi-scatter estimation in multi-source stationary CT.

Xia Y, Zhang L, Xing Y, Chen Z, Gao H

PubMed · Jul 3, 2025
Multi-source stationary computed tomography (MSS-CT) offers significant advantages in medical and industrial applications due to its gantryless scan architecture and capability of simultaneous multi-source emission. However, the lack of anti-scatter grid deployment in MSS-CT leads to severe forward and cross scatter contamination, necessitating accurate and efficient scatter correction. In this work, we propose ComptoNet, an innovative decoupled deep learning framework that integrates Compton-scattering physics with deep learning for scatter estimation in MSS-CT. The core innovation lies in the Compton-map, a representation of large-angle Compton scatter signals outside the scan field of view. ComptoNet employs a dual-network architecture: a Conditional Encoder-Decoder Network (CED-Net) guided by reference Compton-maps and spare detector data for cross scatter estimation, and a Frequency U-Net with attention mechanisms for forward scatter correction. Experiments on Monte Carlo-simulated data demonstrate ComptoNet's superior performance, achieving a mean absolute percentage error (MAPE) of 0.84% on scatter estimation. After correction, CT images show nearly artifact-free quality, validating ComptoNet's robustness in mitigating scatter-induced errors across diverse photon counts and phantoms.
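
For reference, the reported scatter-estimation figure corresponds to the standard mean absolute percentage error; a plain NumPy definition (with an epsilon guard added here for numerical safety) is:

    import numpy as np

    def mape(estimated_scatter, reference_scatter, eps=1e-8):
        """Mean absolute percentage error between an estimated scatter signal and a
        reference (e.g. Monte Carlo simulated) scatter signal."""
        estimated_scatter = np.asarray(estimated_scatter, dtype=np.float64)
        reference_scatter = np.asarray(reference_scatter, dtype=np.float64)
        return 100.0 * np.mean(np.abs(estimated_scatter - reference_scatter) / (np.abs(reference_scatter) + eps))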

CT-Mamba: A hybrid convolutional State Space Model for low-dose CT denoising.

Li L, Wei W, Yang L, Zhang W, Dong J, Liu Y, Huang H, Zhao W

PubMed · Jul 3, 2025
Low-dose CT (LDCT) significantly reduces the radiation dose received by patients; however, dose reduction introduces additional noise and artifacts. Currently, denoising methods based on convolutional neural networks (CNNs) face limitations in long-range modeling capabilities, while Transformer-based denoising methods, although capable of powerful long-range modeling, suffer from high computational complexity. Furthermore, the denoised images predicted by deep learning-based techniques inevitably exhibit differences in noise distribution compared to normal-dose CT (NDCT) images, which can also impact the final image quality and diagnostic outcomes. This paper proposes CT-Mamba, a hybrid convolutional State Space Model for LDCT image denoising. The model combines the local feature extraction advantages of CNNs with Mamba's strength in capturing long-range dependencies, enabling it to capture both local details and global context. Additionally, we introduce an innovative spatially coherent Z-shaped scanning scheme to ensure spatial continuity between adjacent pixels in the image. We design a Mamba-driven deep noise power spectrum (NPS) loss function to guide model training, ensuring that the noise texture of the denoised LDCT images closely resembles that of NDCT images, thereby enhancing overall image quality and diagnostic value. Experimental results demonstrate that CT-Mamba excels at reducing noise in LDCT images, preserving detail, and optimizing the noise texture distribution, and that its outputs exhibit higher statistical similarity to the radiomics features of NDCT images. The proposed CT-Mamba demonstrates outstanding performance in LDCT denoising and holds promise as a representative approach for applying the Mamba framework to LDCT denoising tasks.
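
As a rough sketch of a frequency-domain noise-texture term (a simplified stand-in for the paper's Mamba-driven NPS loss, with an assumed L1 comparison of power spectra rather than the authors' formulation):

    import torch

    def power_spectrum(img):
        """Magnitude-squared 2D Fourier spectrum, DC shifted to the center."""
        return torch.fft.fftshift(torch.fft.fft2(img)).abs() ** 2

    def nps_style_loss(denoised_ldct, ndct):
        """Hypothetical frequency-domain penalty: match the power spectrum of the
        denoised LDCT image to that of the NDCT target so the residual noise
        texture is not over-smoothed."""
        return torch.mean(torch.abs(power_spectrum(denoised_ldct) - power_spectrum(ndct)))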

Clinical value of the 70-kVp ultra-low-dose CT pulmonary angiography with deep learning image reconstruction.

Zhang Y, Wang L, Yuan D, Qi K, Zhang M, Zhang W, Gao J, Liu J

PubMed · Jul 2, 2025
This study aims to assess the feasibility of "double-low" (low radiation dose and low contrast media dose) CT pulmonary angiography (CTPA) based on deep-learning image reconstruction (DLIR) algorithms. One hundred consecutive patients (41 females; average age 60.9 years, range 18-90) were prospectively scanned on multi-detector CT systems. Fifty patients in the conventional-dose group (CD group) underwent CTPA with a 100-kVp protocol using a traditional iterative reconstruction algorithm, and 50 patients in the low-dose group (LD group) underwent CTPA with a 70-kVp DLIR protocol. Radiation and contrast agent doses were recorded and compared between groups, and objective parameters were measured and compared. Two radiologists separately rated overall image quality, artifacts, and image contrast on a 5-point scale, and the furthest visible branches were compared between groups. Compared to the CD group, the LD group reduced the dose-length product by 80.3% (p < 0.01) and the contrast media dose by 33.3%. CT values, SD values, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) showed no statistically significant differences between the LD and CD groups (all p > 0.05). Overall image quality scores were comparable between the LD and CD groups (p > 0.05), with good inter-reader agreement (k = 0.75). More peripheral pulmonary vessels could be assessed in the LD group than in the CD group. A 70-kVp protocol combined with DLIR reconstruction for CTPA can further reduce radiation and contrast agent doses while maintaining image quality and improving the visibility of distal pulmonary artery branches. Question: Elevated radiation exposure and substantial doses of contrast media during CT pulmonary angiography (CTPA) augment patient risks. Findings: The "double-low" CTPA protocol can reduce radiation dose by 80.3% and contrast dose by one-third while maintaining image quality. Clinical relevance: With deep learning reconstruction, CTPA images maintained excellent quality despite reduced radiation and contrast dosages, helping to lower radiation exposure and kidney burden for patients; the "double-low" CTPA protocol, complemented by deep learning image reconstruction, prioritizes patient safety.
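
The objective parameters referred to above are conventionally computed from ROI statistics; the standard definitions are sketched below with purely illustrative HU values, not measurements from the study.

    def snr(vessel_mean_hu, vessel_sd_hu):
        """Signal-to-noise ratio of a vessel ROI."""
        return vessel_mean_hu / vessel_sd_hu

    def cnr(vessel_mean_hu, muscle_mean_hu, muscle_sd_hu):
        """Contrast-to-noise ratio between an enhanced vessel and adjacent muscle."""
        return (vessel_mean_hu - muscle_mean_hu) / muscle_sd_hu

    # Illustrative numbers only: a pulmonary artery ROI of 450 HU (SD 25 HU) vs. muscle at 50 HU (SD 15 HU).
    print(snr(450.0, 25.0))        # 18.0
    print(cnr(450.0, 50.0, 15.0))  # ~26.7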

Hybrid deep learning architecture for scalable and high-quality image compression.

Al-Khafaji M, Ramaha NTA

PubMed · Jul 2, 2025
The rapid growth of medical imaging data presents challenges for efficient storage and transmission, particularly in clinical and telemedicine applications where image fidelity is crucial. This study proposes a hybrid deep learning-based image compression framework that integrates Stationary Wavelet Transform (SWT), Stacked Denoising Autoencoder (SDAE), Gray-Level Co-occurrence Matrix (GLCM), and K-means clustering. The framework enables multiresolution decomposition, texture-aware feature extraction, and adaptive region-based compression. A custom loss function that combines Mean Squared Error (MSE) and Structural Similarity Index (SSIM) ensures high perceptual quality and compression efficiency. The proposed model was evaluated across multiple benchmark medical imaging datasets and achieved a Peak Signal-to-Noise Ratio (PSNR) of up to 50.36 dB, MS-SSIM of 0.9999, and an encoding-decoding time of 0.065 s. These results demonstrate the model's capability to outperform existing approaches while maintaining diagnostic integrity, scalability, and speed, making it suitable for real-time and resource-constrained clinical environments.
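
For context, the reported PSNR figure follows the standard definition; a minimal NumPy version (data range assumed here to be 8-bit) is:

    import numpy as np

    def psnr(original, reconstructed, data_range=255.0):
        """Peak signal-to-noise ratio in dB between an original image and its reconstruction."""
        original = np.asarray(original, dtype=np.float64)
        reconstructed = np.asarray(reconstructed, dtype=np.float64)
        mse = np.mean((original - reconstructed) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)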

Neural networks with personalized training for improved MOLLI T<sub>1</sub> mapping.

Gkatsoni O, Xanthis CG, Johansson S, Heiberg E, Arheden H, Aletras AH

PubMed · Jul 1, 2025
The aim of this study was to develop a method for personalized training of deep neural networks by means of an MRI simulator to improve MOLLI native T<sub>1</sub> estimates relative to conventional fitting methods. The proposed Personalized Training Neural Network (PTNN) for T<sub>1</sub> mapping was based on a neural network trained with simulated MOLLI signals generated for each individual scan, taking into account both the pulse sequence parameters and the heart rate triggers of the specific healthy volunteer. Experimental data from eleven phantoms and ten healthy volunteers were included in the study. In phantom studies, T<sub>1</sub> estimates obtained with the PTNN showed a statistically significantly smaller bias against reference values than conventional fitting estimates (-26.69 ± 29.5 ms vs. -65.0 ± 33.25 ms, p < 0.001). For in vivo studies, T<sub>1</sub> estimates derived from the PTNN were higher (1152.4 ± 25.8 ms myocardium, 1640.7 ± 30.6 ms blood) than those from conventional fitting (1050.8 ± 24.7 ms myocardium, 1597.2 ± 39.9 ms blood). For the PTNN, shortening the acquisition time by eliminating the pause between inversion pulses produced a small but statistically significant difference in myocardial T<sub>1</sub> (1162.2 ± 19.7 ms with the pause vs. 1127.1 ± 19.7 ms without, p = 0.01) and no significant difference in blood T<sub>1</sub> (1624.7 ± 33.9 ms vs. 1645.4 ± 18.7 ms, p = 0.16). For conventional fitting, statistically significant differences were found. Compared to T<sub>1</sub> maps derived by conventional fitting, the PTNN is a post-processing method that yielded T<sub>1</sub> maps with higher values and better accuracy in phantoms over a physiological range of T<sub>1</sub> and T<sub>2</sub> values. In healthy volunteers, the PTNN yielded higher T<sub>1</sub> values even with a shorter acquisition scheme of eight heartbeats of scan time, without deploying new pulse sequences.
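
The MOLLI signals being simulated are commonly described by the three-parameter Look-Locker model S(TI) = A - B·exp(-TI/T<sub>1</sub>*), with corrected T<sub>1</sub> = T<sub>1</sub>*·(B/A - 1); the sketch below uses this textbook model with illustrative parameter values and does not reproduce the study's MRI simulator.

    import numpy as np

    def molli_signal(ti_ms, a, b, t1_star_ms):
        """Three-parameter MOLLI recovery model: S(TI) = A - B * exp(-TI / T1*)."""
        return a - b * np.exp(-np.asarray(ti_ms, dtype=np.float64) / t1_star_ms)

    def look_locker_correction(a, b, t1_star_ms):
        """Standard Look-Locker correction: T1 = T1* * (B / A - 1)."""
        return t1_star_ms * (b / a - 1.0)

    # Illustrative values: A = 1.0, B = 2.35, T1* = 850 ms gives T1 ~= 1148 ms (myocardial range).
    ti = np.array([120.0, 200.0, 1120.0, 1200.0, 2120.0, 3120.0, 4120.0, 5120.0])  # example inversion times (ms)
    signal = molli_signal(ti, a=1.0, b=2.35, t1_star_ms=850.0)
    print(look_locker_correction(1.0, 2.35, 850.0))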

CALIMAR-GAN: An unpaired mask-guided attention network for metal artifact reduction in CT scans.

Scardigno RM, Brunetti A, Marvulli PM, Carli R, Dotoli M, Bevilacqua V, Buongiorno D

PubMed · Jul 1, 2025
High-quality computed tomography (CT) scans are essential for accurate diagnostic and therapeutic decisions, but the presence of metal objects within the body can produce distortions that lower image quality. Deep learning (DL) approaches using image-to-image translation for metal artifact reduction (MAR) show promise over traditional methods but often introduce secondary artifacts. Additionally, most rely on paired simulated data due to limited availability of real paired clinical data, restricting evaluation on clinical scans to qualitative analysis. This work presents CALIMAR-GAN, a generative adversarial network (GAN) model that employs a guided attention mechanism and the linear interpolation algorithm to reduce artifacts using unpaired simulated and clinical data for targeted artifact reduction. Quantitative evaluations on simulated images demonstrated superior performance, achieving a PSNR of 31.7, SSIM of 0.877, and Fréchet inception distance (FID) of 22.1, outperforming state-of-the-art methods. On real clinical images, CALIMAR-GAN achieved the lowest FID (32.7), validated as a valuable complement to qualitative assessments through correlation with pixel-based metrics (r=-0.797 with PSNR, p<0.01; r=-0.767 with MS-SSIM, p<0.01). This work advances DL-based artifact reduction into clinical practice with high-fidelity reconstructions that enhance diagnostic accuracy and therapeutic outcomes. Code is available at https://github.com/roberto722/calimar-gan.
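
The linear interpolation step mentioned above is commonly understood as interpolating across the metal trace in the sinogram; a simplified NumPy sketch of that classic operation (not the authors' code) is:

    import numpy as np

    def interpolate_metal_trace(sinogram, metal_mask):
        """Classic linear-interpolation MAR: for each projection angle, replace detector
        readings inside the metal trace with values interpolated from unaffected readings."""
        corrected = sinogram.astype(np.float64).copy()
        detector_positions = np.arange(sinogram.shape[1])
        for angle in range(sinogram.shape[0]):
            bad = metal_mask[angle].astype(bool)
            if bad.any() and (~bad).any():
                corrected[angle, bad] = np.interp(
                    detector_positions[bad],    # positions to fill
                    detector_positions[~bad],   # known positions
                    corrected[angle, ~bad],     # known values
                )
        return corrected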