Page 7 of 18177 results

An Adaptive SCG-ECG Multimodal Gating Framework for Cardiac CTA.

Ganesh S, Abozeed M, Aziz U, Tridandapani S, Bhatti PT

pubmed · Jun 1 2025
Cardiovascular disease (CVD) is the leading cause of death worldwide. Coronary artery disease (CAD), a prevalent form of CVD, is typically assessed using catheter coronary angiography (CCA), an invasive, costly procedure with associated risks. While cardiac computed tomography angiography (CTA) presents a less invasive alternative, it suffers from limited temporal resolution, often resulting in motion artifacts that degrade diagnostic quality. Traditional ECG-based gating methods for CTA inadequately capture cardiac mechanical motion. To address this, we propose a novel multimodal approach that enhances CTA imaging by predicting cardiac quiescent periods using seismocardiogram (SCG) and ECG data, integrated through a weighted fusion (WF) approach and artificial neural networks (ANNs). We developed a regression-based ANN framework (r-ANN WF) designed to improve prediction accuracy and reduce computational complexity, which was compared with a classification-based framework (c-ANN WF), ECG gating, and ultrasound (US) data. Our results demonstrate that the r-ANN WF approach improved overall diastolic and systolic cardiac quiescence prediction accuracy by 52.6% compared to ECG-based predictions, using US as the ground truth, with an average prediction time of 4.83 ms. Comparative evaluations based on reconstructed CTA images show that both r-ANN WF and c-ANN WF offer diagnostic quality comparable to US-based gating, underscoring their clinical potential. Additionally, the lower computational complexity of r-ANN WF makes it suitable for real-time applications. This approach could enhance CTA's diagnostic quality, offering a more accurate and efficient method for CVD diagnosis and management.
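
The weighted-fusion idea can be sketched numerically: combine SCG- and ECG-derived quiescence estimates with a convex weight chosen against the ultrasound ground truth. This is a minimal illustration assuming scalar per-frame estimates; the function names and toy values are hypothetical, and the paper's actual framework feeds fused data through ANNs rather than a plain grid search.

```python
# Minimal sketch of weighted fusion (WF) of two quiescence estimates.
# The fusion weight is chosen on training data to minimize squared error
# against an ultrasound-derived ground truth. All names and values are
# illustrative, not the paper's implementation.

def fuse(scg, ecg, w):
    """Convex combination of SCG- and ECG-based estimates."""
    return [w * s + (1.0 - w) * e for s, e in zip(scg, ecg)]

def best_weight(scg, ecg, truth, steps=100):
    """Grid-search the fusion weight minimizing mean squared error."""
    def mse(w):
        f = fuse(scg, ecg, w)
        return sum((fi - ti) ** 2 for fi, ti in zip(f, truth)) / len(truth)
    return min((i / steps for i in range(steps + 1)), key=mse)

# Toy example: the ground truth sits much closer to the SCG estimate,
# so the learned weight favors SCG.
scg = [0.40, 0.55, 0.62]
ecg = [0.20, 0.35, 0.50]
truth = [0.38, 0.53, 0.61]
w = best_weight(scg, ecg, truth)
print(round(w, 2))
```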

Deep learning-based acceleration of high-resolution compressed sense MR imaging of the hip.

Marka AW, Meurer F, Twardy V, Graf M, Ebrahimi Ardjomand S, Weiss K, Makowski MR, Gersing AS, Karampinos DC, Neumann J, Woertler K, Banke IJ, Foreman SC

pubmed · Jun 1 2025
To evaluate a Compressed Sense Artificial Intelligence framework (CSAI) incorporating parallel imaging, compressed sense (CS), and deep learning for high-resolution MRI of the hip, comparing it with standard-resolution CS imaging. Thirty-two patients with femoroacetabular impingement syndrome underwent 3 T MRI scans. Coronal and sagittal intermediate-weighted TSE sequences with fat saturation were acquired using CS (0.6 × 0.8 mm resolution) and CSAI (0.3 × 0.4 mm resolution) protocols in comparable acquisition times (7:49 vs. 8:07 minutes for both planes). Two readers systematically assessed the depiction of the acetabular and femoral cartilage (in five cartilage zones), labrum, ligamentum capitis femoris, and bone using a five-point Likert scale. Diagnostic confidence and abnormality detection were recorded and analyzed using the Wilcoxon signed-rank test. CSAI significantly improved cartilage depiction across most cartilage zones compared to CS. Overall Likert scores were 4.0 ± 0.2 (CS) vs. 4.2 ± 0.6 (CSAI) for reader 1 and 4.0 ± 0.2 (CS) vs. 4.3 ± 0.6 (CSAI) for reader 2 (p ≤ 0.001). Diagnostic confidence increased from 3.5 ± 0.7 and 3.9 ± 0.6 (CS) to 4.0 ± 0.6 and 4.1 ± 0.7 (CSAI) for readers 1 and 2, respectively (p ≤ 0.001). More cartilage lesions were detected with CSAI, with significant improvements in diagnostic confidence in certain cartilage zones, such as femoral zones C and D, for both readers. Labrum and ligamentum capitis femoris depiction remained similar, while bone depiction was rated lower. No abnormalities detected in CS were missed in CSAI. CSAI provides high-resolution hip MR images with enhanced cartilage depiction without extending acquisition times, potentially enabling more precise hip cartilage assessment.
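
The paired Likert comparisons above rely on the Wilcoxon signed-rank test. A minimal sketch of that test (zero differences dropped, tied absolute differences given average ranks, two-sided p from a normal approximation) could look like the following; real analyses would use a statistics package that also handles exact small-sample p-values and tie corrections.

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test with a normal approximation.
    A sketch for illustration only: zero differences are dropped, tied
    |differences| get average ranks, and no continuity or tie-variance
    correction is applied."""
    diffs = [a - b for a, b in zip(x, y) if a != b]
    n = len(diffs)
    # Rank absolute differences, averaging ranks over ties.
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    w = min(w_plus, w_minus)
    mu = n * (n + 1) / 4.0
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    z = (w - mu) / sigma
    p = 1.0 - math.erf(abs(z) / math.sqrt(2.0))  # two-sided
    return w, p

# Hypothetical paired Likert scores (one reader, CS vs. CSAI).
cs = [4, 4, 4, 3, 4, 4, 4, 4]
csai = [5, 4, 5, 4, 5, 5, 4, 5]
w, p = wilcoxon_signed_rank(cs, csai)
print(w, round(p, 4))
```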

Deep learning enabled near-isotropic CAIPIRINHA VIBE in the nephrographic phase improves image quality and renal lesion conspicuity.

Tan Q, Miao J, Nitschke L, Nickel MD, Lerchbaumer MH, Penzkofer T, Hofbauer S, Peters R, Hamm B, Geisel D, Wagner M, Walter-Rittel TC

pubmed · Jun 1 2025
Deep learning (DL)-accelerated controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA) volumetric interpolated breath-hold examination (VIBE) provides high-spatial-resolution T1-weighted imaging of the upper abdomen. We aimed to investigate whether DL-CAIPIRINHA-VIBE can improve image quality, vessel conspicuity, and lesion detectability compared to standard CAIPIRINHA-VIBE in renal imaging at 3 Tesla. In this prospective study, 50 patients with 23 solid and 45 cystic renal lesions underwent MRI with clinical MR sequences, including standard CAIPIRINHA-VIBE and DL-CAIPIRINHA-VIBE sequences in the nephrographic phase at 3 Tesla. Two experienced radiologists independently evaluated both sequences and multiplanar reconstructions (MPR) of the sagittal and coronal planes for image quality with a Likert scale ranging from 1 to 5 (5 = best). Quantitative measurements, including the size of the largest lesion and renal lesion contrast ratios, were evaluated. Compared to standard CAIPIRINHA-VIBE, DL-CAIPIRINHA-VIBE showed significantly improved overall image quality; higher scores for delineation of the renal borders, renal sinuses, vessels, and adrenal glands; and reduced motion artifacts and perceived noise in nephrographic phase images (all p < 0.001). DL-CAIPIRINHA-VIBE with MPR showed superior lesion conspicuity and diagnostic confidence compared to standard CAIPIRINHA-VIBE. However, DL-CAIPIRINHA-VIBE presented a more synthetic appearance and more aliasing artifacts (p < 0.023). The mean size and signal intensity of renal lesions showed no significant differences between DL-CAIPIRINHA-VIBE and standard CAIPIRINHA-VIBE (p > 0.9). DL-CAIPIRINHA-VIBE is well suited for kidney imaging in the nephrographic phase, providing good image quality and improved delineation of anatomic structures and renal lesions.
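
The abstract reports "renal lesion contrast ratios" without spelling out a formula. One commonly used definition (an assumption here, not necessarily the authors') normalizes the lesion-to-parenchyma signal difference by the summed signal:

```python
def lesion_contrast_ratio(s_lesion, s_parenchyma):
    """One common contrast-ratio definition (an assumption; the paper
    does not state its formula): normalized absolute signal difference
    between lesion and adjacent renal parenchyma ROI means."""
    return abs(s_lesion - s_parenchyma) / (s_lesion + s_parenchyma)

# Hypothetical ROI means: a cystic lesion (low signal) against
# enhancing parenchyma in the nephrographic phase.
print(round(lesion_contrast_ratio(80.0, 240.0), 3))
```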

Accelerated High-resolution T1- and T2-weighted Breast MRI with Deep Learning Super-resolution Reconstruction.

Mesropyan N, Katemann C, Leutner C, Sommer A, Isaak A, Weber OM, Peeters JM, Dell T, Bischoff L, Kuetting D, Pieper CC, Lakghomi A, Luetkens JA

pubmed · Jun 1 2025
To assess the performance of an industry-developed deep learning (DL) algorithm to reconstruct low-resolution Cartesian T1-weighted dynamic contrast-enhanced (T1w) and T2-weighted turbo-spin-echo (T2w) sequences and compare them to standard sequences. Female patients with indications for breast MRI were included in this prospective study. The study protocol at 1.5 Tesla MRI included T1w and T2w. Both sequences were acquired in standard resolution (T1<sub>S</sub> and T2<sub>S</sub>) and in low resolution with subsequent DL reconstruction (T1<sub>DL</sub> and T2<sub>DL</sub>). For DL reconstruction, two convolutional networks were used: (1) Adaptive-CS-Net for denoising with compressed sensing, and (2) Precise-Image-Net for resolution upscaling of previously downscaled images. Overall image quality was assessed using a 5-point Likert scale (from 1 = non-diagnostic to 5 = excellent). Apparent signal-to-noise (aSNR) and contrast-to-noise (aCNR) ratios were calculated. Breast Imaging Reporting and Data System (BI-RADS) agreement between the different sequence types was assessed. A total of 47 patients were included (mean age, 58 ± 11 years). Acquisition times for T1<sub>DL</sub> and T2<sub>DL</sub> were reduced by 51% (44 vs. 90 s per dynamic phase) and 46% (102 vs. 192 s), respectively. T1<sub>DL</sub> and T2<sub>DL</sub> showed higher overall image quality (e.g., 4 [IQR, 4-4] for T1<sub>S</sub> vs. 5 [IQR, 5-5] for T1<sub>DL</sub>, P < 0.001). Both T1<sub>DL</sub> and T2<sub>DL</sub> revealed higher aSNR and aCNR than T1<sub>S</sub> and T2<sub>S</sub> (e.g., aSNR: 32.35 ± 10.23 for T2<sub>S</sub> vs. 27.88 ± 6.86 for T2<sub>DL</sub>, P = 0.014). Cohen's κ agreement for BI-RADS assessment was excellent (0.962, P < 0.001). DL for denoising and resolution upscaling reduces acquisition time and improves image quality for T1w and T2w breast MRI.
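
Apparent SNR and CNR as reported above are typically computed from region-of-interest (ROI) statistics. The ROI choices below (signal ROI, reference-tissue ROI, background-air ROI for noise) are illustrative assumptions, not the study's exact measurement protocol:

```python
from statistics import mean, stdev

def asnr(signal_roi, noise_roi):
    """Apparent SNR: mean signal over the SD of a background noise ROI.
    ROI definitions here are illustrative assumptions."""
    return mean(signal_roi) / stdev(noise_roi)

def acnr(tissue_roi, reference_roi, noise_roi):
    """Apparent CNR: tissue-vs-reference signal difference over noise SD."""
    return abs(mean(tissue_roi) - mean(reference_roi)) / stdev(noise_roi)

# Hypothetical ROI pixel values from a breast MR image.
fibroglandular = [520.0, 510.0, 530.0, 515.0]
fat = [220.0, 210.0, 230.0, 228.0]
air = [10.0, 12.0, 8.0, 11.0]
a1 = asnr(fibroglandular, air)
a2 = acnr(fibroglandular, fat, air)
print(round(a1, 1), round(a2, 1))
```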

Ultra-fast biparametric MRI in prostate cancer assessment: Diagnostic performance and image quality compared to conventional multiparametric MRI.

Pausch AM, Filleböck V, Elsner C, Rupp NJ, Eberli D, Hötker AM

pubmed · Jun 1 2025
To compare the diagnostic performance and image quality of a deep-learning-assisted ultra-fast biparametric MRI (bpMRI) protocol with conventional multiparametric MRI (mpMRI) for the diagnosis of clinically significant prostate cancer (csPCa). This prospective single-center study enrolled 123 biopsy-naïve patients undergoing conventional mpMRI and, additionally, ultra-fast bpMRI at 3 T between 06/2023 and 02/2024. Two radiologists (R1: 4 years and R2: 3 years of experience) independently assigned PI-RADS scores (PI-RADS v2.1) and assessed image quality (mPI-QUAL score) in two blinded study readouts. Weighted Cohen's kappa (κ) was calculated to evaluate inter-reader agreement. Diagnostic performance was analyzed using clinical data and histopathological results from clinically indicated biopsies. Inter-reader agreement was good for both mpMRI (κ = 0.83) and ultra-fast bpMRI (κ = 0.87). Both readers demonstrated high sensitivity (≥94 %/≥91 %, R1/R2) and NPV (≥96 %/≥95 %) for csPCa detection using both protocols. The more experienced reader generally showed notably higher specificity (≥77 %/≥53 %), PPV (≥62 %/≥45 %), and diagnostic accuracy (≥82 %/≥65 %) than the less experienced reader. There was no significant difference between the two protocols in correctly identifying csPCa (p > 0.05). The ultra-fast bpMRI protocol had significantly better image quality ratings (p < 0.001) and achieved an 80 % reduction in scan time compared to conventional mpMRI. Deep-learning-assisted ultra-fast bpMRI protocols offer a promising alternative to conventional mpMRI for diagnosing csPCa in biopsy-naïve patients, with comparable inter-reader agreement and diagnostic performance at superior image quality. However, reader experience remains essential for diagnostic performance.
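
Inter-reader agreement on ordinal PI-RADS scores is quantified with weighted Cohen's kappa. A minimal sketch with quadratic weights (a common choice for ordinal scales; the abstract does not state which weighting scheme was used):

```python
def quadratic_weighted_kappa(r1, r2, categories):
    """Quadratic-weighted Cohen's kappa for two raters' ordinal scores
    (e.g., PI-RADS 1-5). A minimal sketch; statistical packages also
    report confidence intervals and alternative weightings."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint distribution and marginals.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    weight = lambda i, j: ((i - j) ** 2) / ((k - 1) ** 2)
    disagree_obs = sum(weight(i, j) * obs[i][j]
                       for i in range(k) for j in range(k))
    disagree_exp = sum(weight(i, j) * p1[i] * p2[j]
                       for i in range(k) for j in range(k))
    return 1.0 - disagree_obs / disagree_exp

# Hypothetical PI-RADS scores from two readers for 10 patients.
r1 = [3, 4, 5, 2, 4, 3, 5, 1, 2, 4]
r2 = [3, 4, 4, 2, 4, 3, 5, 1, 3, 4]
print(round(quadratic_weighted_kappa(r1, r2, [1, 2, 3, 4, 5]), 3))
```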

Incorporating Radiologist Knowledge Into MRI Quality Metrics for Machine Learning Using Rank-Based Ratings.

Tang C, Eisenmenger LB, Rivera-Rivera L, Huo E, Junn JC, Kuner AD, Oechtering TH, Peret A, Starekova J, Johnson KM

pubmed · Jun 1 2025
Deep learning (DL) often requires an image quality metric; however, widely used metrics are not designed for medical images. To develop an image quality metric that is specific to MRI using radiologists' image rankings and DL models. Retrospective. A total of 19,344 rankings on 2916 unique image pairs from the NYU fastMRI Initiative neuro database were used to train the neural-network-based image quality metrics, with an 80%/20% training/validation split and fivefold cross-validation. 1.5 T and 3 T T1, T1 postcontrast, T2, and fluid-attenuated inversion recovery (FLAIR). Synthetically corrupted image pairs were ranked by radiologists (N = 7), with a subset also scoring images using a Likert scale (N = 2). DL models were trained to match rankings using two architectures (EfficientNet and IQ-Net), with and without reference image subtraction, and compared to ranking based on mean squared error (MSE) and structural similarity (SSIM). The image quality-assessing DL models were also evaluated as alternatives to MSE and SSIM as optimization targets for DL denoising and reconstruction. Radiologists' agreement was assessed by a percentage metric and quadratic weighted Cohen's kappa. Ranking accuracies were compared using repeated measures analysis of variance. Reconstruction models trained with the IQ-Net score, MSE, and SSIM were compared by paired t test. P < 0.05 was considered significant. Compared to direct Likert scoring, ranking produced a higher level of agreement between radiologists (70.4% vs. 25%). Image ranking was subjective, with a high level of intraobserver agreement (94.9% ± 2.4%) and lower interobserver agreement (61.47% ± 5.51%). IQ-Net and EfficientNet accurately predicted rankings with a reference image (75.2% ± 1.3% and 79.2% ± 1.7%, respectively). However, EfficientNet resulted in images with artifacts and high MSE when used in denoising tasks, while IQ-Net-optimized networks performed well for both denoising and reconstruction tasks. Image quality networks can be trained from image rankings and used to optimize DL tasks. Evidence Level: 3. Technical Efficacy: Stage 1.
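
Training a quality network from pairwise rankings is commonly done with a logistic (RankNet-style) loss on the score difference. The sketch below illustrates that generic ingredient with a toy one-parameter scorer; it is not the paper's exact objective or architecture.

```python
import math

def pairwise_rank_loss(score_preferred, score_other):
    """Logistic (RankNet-style) loss on a score difference: small when
    the radiologist-preferred image is scored higher. A generic
    rank-learning ingredient, not the paper's exact objective."""
    return math.log(1.0 + math.exp(-(score_preferred - score_other)))

def score(img, ref, w):
    """Toy one-parameter 'quality network': minus weighted MSE against a
    reference image (purely illustrative)."""
    m = sum((a - b) ** 2 for a, b in zip(img, ref)) / len(img)
    return -w * m

ref = [1.0, 2.0, 3.0]
good = [1.1, 2.0, 2.9]  # mildly corrupted: ranked higher by the reader
bad = [2.0, 0.5, 4.0]   # heavily corrupted
# As the scorer separates the pair more strongly, the ranking loss drops,
# which is the gradient signal a rank-trained metric learns from.
for w in (0.1, 1.0, 10.0):
    print(w, round(pairwise_rank_loss(score(good, ref, w),
                                      score(bad, ref, w)), 4))
```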

GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal.

Zhang Y, Liu G, Liu Y, Xie S, Gu J, Huang Z, Ji X, Lyu T, Xi Y, Zhu S, Yang J, Chen Y

pubmed · Jun 1 2025
In computed tomography (CT) imaging, ring artifacts caused by inconsistent detector responses can significantly degrade reconstructed images, negatively affecting subsequent applications. The new generation of CT systems based on photon-counting detectors is affected by ring artifacts even more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model to characterize the ring artifacts. In this context, this study proposes a global dependency-enhanced dual-domain parallel neural network for ring artifact removal (RAR). First, based on the fact that the features of ring artifacts differ between Cartesian and polar coordinates, a parallel architecture is adopted to construct the deep neural network so that it can extract and exploit latent features from both domains to improve ring artifact removal. Furthermore, ring artifacts are globally correlated in both Cartesian and polar coordinate systems, but convolutional neural networks show inherent shortcomings in modeling long-range dependency. To tackle this problem, this study introduces the Mamba mechanism to achieve a global receptive field without incurring high computational complexity, enabling effective capture of long-range dependency and thereby enhancing model performance in image restoration and artifact reduction. Experiments on simulated data validate the effectiveness of the dual-domain parallel neural network and the Mamba mechanism, and results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.
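
The Cartesian/polar duality the network exploits can be demonstrated directly: resampling an image onto an (r, θ) grid turns concentric rings into near-constant rows, which are much easier to model. A nearest-neighbor sketch (illustrative only; real pipelines use interpolated, invertible transforms):

```python
import math

def to_polar(img, n_r, n_theta):
    """Nearest-neighbor resampling of a square image onto an (r, theta)
    grid centered on the image. Concentric ring artifacts become
    near-constant rows in this domain, which is the property
    dual-domain methods exploit. Illustrative sketch only."""
    n = len(img)
    c = (n - 1) / 2.0
    out = [[0.0] * n_theta for _ in range(n_r)]
    for ri in range(n_r):
        for ti in range(n_theta):
            theta = 2 * math.pi * ti / n_theta
            x = c + ri * math.cos(theta)
            y = c + ri * math.sin(theta)
            xi, yi = int(round(x)), int(round(y))
            if 0 <= xi < n and 0 <= yi < n:
                out[ri][ti] = img[yi][xi]
    return out

# A synthetic ring of radius 5 in a 21x21 image ...
n = 21
c = (n - 1) / 2.0
img = [[1.0 if abs(math.hypot(x - c, y - c) - 5) < 0.5 else 0.0
        for x in range(n)] for y in range(n)]
polar = to_polar(img, n_r=10, n_theta=64)
# ... concentrates in the r = 5 row of the polar image.
print(sum(polar[5]), sum(polar[2]))
```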

Explicit Abnormality Extraction for Unsupervised Motion Artifact Reduction in Magnetic Resonance Imaging.

Zhou Y, Li H, Liu J, Kong Z, Huang T, Ahn E, Lv Z, Kim J, Feng DD

pubmed · Jun 1 2025
Motion artifacts compromise the quality of magnetic resonance imaging (MRI) and pose challenges to achieving diagnostic outcomes and image-guided therapies. In recent years, supervised deep learning approaches have emerged as successful solutions for motion artifact reduction (MAR). One disadvantage of these methods is their dependency on acquiring paired sets of motion artifact-corrupted (MA-corrupted) and motion artifact-free (MA-free) MR images for training purposes. Obtaining such image pairs is difficult and therefore limits the application of supervised training. In this paper, we propose a novel UNsupervised Abnormality Extraction Network (UNAEN) to alleviate this problem. Our network is capable of working with unpaired MA-corrupted and MA-free images. It converts MA-corrupted images to MA-reduced images by extracting abnormalities from the MA-corrupted images using a proposed artifact extractor, which explicitly intercepts the residual artifact maps from the MA-corrupted MR images, and a reconstructor to restore the original input from the MA-reduced images. The performance of UNAEN was assessed by experimenting with various publicly available MRI datasets and comparing it with state-of-the-art methods. The quantitative evaluation demonstrates the superiority of UNAEN over alternative MAR methods, and the outputs visually exhibit fewer residual artifacts. Our results substantiate the potential of UNAEN as a promising solution applicable in real-world clinical environments, with the capability to enhance diagnostic accuracy and facilitate image-guided therapies.
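
The explicit-abnormality decomposition can be summarized as subtracting a predicted residual artifact map before a reconstruction step. The stand-in callables below are placeholders for the paper's artifact extractor and reconstructor networks, shown in a best-case toy setting where the extractor recovers the artifact exactly:

```python
# Sketch of the explicit-abnormality idea: rather than mapping a corrupted
# image directly to a clean one, a network predicts the residual artifact
# map, which is subtracted; a reconstructor then refines the result.
# The callables below are stand-ins, not the paper's architectures.

def remove_artifacts(x, artifact_extractor, reconstructor):
    residual = artifact_extractor(x)              # explicit artifact map
    ma_reduced = [xi - ri for xi, ri in zip(x, residual)]
    return reconstructor(ma_reduced)              # restore fine detail

# Toy 1-D example: the corruption is a known additive pattern, and the
# "extractor" happens to know it exactly (a best-case illustration).
clean = [0.0, 1.0, 2.0, 3.0]
artifact = [0.5, -0.5, 0.5, -0.5]
corrupted = [c + a for c, a in zip(clean, artifact)]
restored = remove_artifacts(corrupted,
                            artifact_extractor=lambda x: artifact,
                            reconstructor=lambda x: x)
print(restored)
```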

Fine-Tuning Deep Learning Model for Quantitative Knee Joint Mapping With MR Fingerprinting and Its Comparison to Dictionary Matching Method: Fine-Tuning Deep Learning Model for Quantitative MRF.

Zhang X, de Moura HL, Monga A, Zibetti MVW, Regatte RR

pubmed · Jun 1 2025
Magnetic resonance fingerprinting (MRF), as an emerging versatile and noninvasive imaging technique, provides simultaneous quantification of multiple quantitative MRI parameters, which have been used to detect changes in cartilage composition and structure in osteoarthritis. Deep learning (DL)-based methods for quantification mapping in MRF overcome the memory constraints and offer faster processing compared to the conventional dictionary matching (DM) method. However, limited attention has been given to the fine-tuning of neural networks (NNs) in DL and fair comparison with DM. In this study, we investigate the impact of training parameter choices on NN performance and compare the fine-tuned NN with DM for multiparametric mapping in MRF. Our approach includes optimizing NN hyperparameters, analyzing the singular value decomposition (SVD) components of MRF data, and optimization of the DM method. We conducted experiments on synthetic data, the NIST/ISMRM MRI system phantom with ground truth, and in vivo knee data from 14 healthy volunteers. The results demonstrate the critical importance of selecting appropriate training parameters, as these significantly affect NN performance. The findings also show that NNs improve the accuracy and robustness of T<sub>1</sub>, T<sub>2</sub>, and T<sub>1ρ</sub> mappings compared to DM in synthetic datasets. For in vivo knee data, the NN achieved comparable results for T<sub>1</sub>, with slightly lower T<sub>2</sub> and slightly higher T<sub>1ρ</sub> measurements compared to DM. In conclusion, the fine-tuned NN can be used to increase accuracy and robustness for multiparametric quantitative mapping from MRF of the knee joint.
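
The conventional dictionary matching (DM) baseline mentioned above matches a measured fingerprint to the most correlated entry of a precomputed signal dictionary. A toy sketch with hand-made (T1, T2) atoms standing in for Bloch-simulated signal evolutions:

```python
import math

def dictionary_match(fingerprint, dictionary):
    """Conventional MRF dictionary matching: normalize each simulated
    signal evolution and pick the entry with the largest inner product
    with the measured fingerprint. Dictionary entries here are toy
    (T1, T2) atoms, not Bloch-simulated signals."""
    def normalize(v):
        norm = math.sqrt(sum(x * x for x in v))
        return [x / norm for x in v]
    f = normalize(fingerprint)
    best = max(dictionary,
               key=lambda entry: sum(a * b
                                     for a, b in zip(f, normalize(entry[1]))))
    return best[0]  # parameter pair of the best-matching atom

# Toy dictionary: (T1 ms, T2 ms) -> signal evolution.
dictionary = [
    ((800, 40), [1.0, 0.6, 0.4, 0.3]),
    ((1200, 60), [1.0, 0.8, 0.6, 0.5]),
    ((400, 20), [1.0, 0.3, 0.1, 0.05]),
]
measured = [0.98, 0.79, 0.62, 0.49]  # noisy version of the second atom
print(dictionary_match(measured, dictionary))
```

A neural network replaces this exhaustive search with a direct mapping from fingerprint to parameters, which is where the memory and speed gains reported in the abstract come from.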

A CT-free deep-learning-based attenuation and scatter correction for copper-64 PET in different time-point scans.

Adeli Z, Hosseini SA, Salimi Y, Vahidfar N, Sheikhzadeh P

pubmed · Jun 1 2025
This study aimed to develop and evaluate a deep-learning model for attenuation and scatter correction in whole-body 64Cu-based PET imaging. A swinUNETR model was implemented using the MONAI framework. Whole-body PET-nonAC and PET-CTAC image pairs were used for training, where PET-nonAC served as the input and PET-CTAC as the output. Due to the limited number of Cu-based PET/CT images, a model pre-trained on 51 Ga-PSMA PET images was fine-tuned on 15 Cu-based PET images via transfer learning. The model was trained without freezing layers, adapting learned features to the Cu-based dataset. For testing, six additional Cu-based PET images were used, representing 1-h, 12-h, and 48-h time points, with two images per group. The model performed best at the 12-h time point, with an MSE of 0.002 ± 0.0004 SUV<sup>2</sup>, PSNR of 43.14 ± 0.08 dB, and SSIM of 0.981 ± 0.002. At 48 h, accuracy slightly decreased (MSE = 0.036 ± 0.034 SUV<sup>2</sup>), but image quality remained high (PSNR = 44.49 ± 1.09 dB, SSIM = 0.981 ± 0.006). At 1 h, the model also showed strong results (MSE = 0.024 ± 0.002 SUV<sup>2</sup>, PSNR = 45.89 ± 5.23 dB, SSIM = 0.984 ± 0.005), demonstrating consistency across time points. Despite the limited size of the training dataset, the use of fine-tuning from a previously pre-trained model yielded acceptable performance. The results demonstrate that the proposed deep learning model can effectively generate PET-DLAC images that closely resemble PET-CTAC images, with only minor errors.
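
The PSNR values reported above follow directly from the MSE and a peak value. A minimal sketch (taking the peak from the reference image is an assumption here; the study's exact normalization over SUV-valued images is not stated):

```python
import math

def mse(a, b):
    """Mean squared error between two equal-length intensity arrays."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(mean_sq_err, max_val=1.0):
    """Peak signal-to-noise ratio in dB from an MSE. For SUV-valued PET
    images the peak would plausibly be the maximum reference intensity;
    max_val here is an illustrative placeholder."""
    return 10.0 * math.log10(max_val ** 2 / mean_sq_err)

# Hypothetical SUV profiles: reference (CT-based AC) vs. DL-predicted.
ref = [0.2, 0.5, 0.9, 0.4]
pred = [0.21, 0.48, 0.92, 0.41]
m = mse(ref, pred)
print(round(psnr(m, max_val=max(ref)), 2))
```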
