Page 10 of 28271 results

Deep learning image reconstruction and adaptive statistical iterative reconstruction on coronary artery calcium scoring in high risk population for coronary heart disease.

Zhu L, Shi X, Tang L, Machida H, Yang L, Ma M, Ha R, Shen Y, Wang F, Chen D

PubMed, Jul 1 2025
Deep learning image reconstruction (DLIR) effectively improves image quality while maintaining spatial resolution, but its impact on the quantification of coronary artery calcium (CAC) remains unclear. The purpose of this study was to investigate the effect of DLIR on coronary calcium quantification in high-risk populations. A retrospective study was conducted on patients who underwent coronary CT angiography (CCTA) at our hospital in China from February 2022 to September 2022. Raw data were reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction-Veo at the 40% and 80% levels (ASiR-V 40%, ASiR-V 80%), and low-, medium-, and high-level deep learning image reconstruction (DLIR-L, DLIR-M, and DLIR-H). The signal-to-noise ratio, contrast-to-noise ratio, volumetric score, mass score, and Agatston score of the six image sets were calculated and compared. A total of 178 patients were included (107 female; mean age, 62.43 ± 9.26 years; mean BMI, 25.33 ± 3.18 kg/m<sup>2</sup>). Compared with FBP, image noise with ASiR-V and DLIR was significantly reduced (P < 0.001). There was no significant difference in Agatston, volumetric, or mass scores among the six reconstruction algorithms (all P > 0.05). Bland-Altman analysis indicated that the Agatston scores of the five reconstruction algorithms showed good agreement with FBP, with DLIR-L (110.08; 95% CI: 26.48, 432.92) and ASiR-V 40% (110.96; 95% CI: 26.23, 431.34) having the highest consistency with FBP. Compared with FBP, DLIR and ASiR-V improve CT image quality to varying degrees while having no impact on Agatston score-based risk stratification. CACS is a powerful tool for cardiovascular risk stratification, and DLIR can improve image quality without affecting CACS, making it widely applicable in clinical practice.
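For readers unfamiliar with the Agatston score this abstract turns on, the standard weighting scheme (lesions of at least 130 HU scored as area times a density weight of 1 to 4) can be sketched as below. This is the textbook definition only, not the scoring software used in the study; the lesion representation is an illustrative assumption:

```python
def agatston_weight(peak_hu: float) -> int:
    """Standard Agatston density weight from a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions, pixel_area_mm2: float) -> float:
    """Sum area * density weight over calcified lesions.

    `lesions` is a list of (pixel_count, peak_hu) tuples, one per connected
    component of >= 130 HU found on the calcium-scoring slices.
    """
    score = 0.0
    for pixel_count, peak_hu in lesions:
        area = pixel_count * pixel_area_mm2
        if area >= 1.0:  # ignore sub-millimetre specks, usually noise
            score += area * agatston_weight(peak_hu)
    return score
```

A study like this one compares such scores across reconstructions of the same raw data; identical lesion areas and peak HU values yield identical Agatston scores regardless of the noise level.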

Deep learning model for low-dose CT late iodine enhancement imaging and extracellular volume quantification.

Yu Y, Wu D, Lan Z, Dai X, Yang W, Yuan J, Xu Z, Wang J, Tao Z, Ling R, Zhang S, Zhang J

PubMed, Jul 1 2025
To develop and validate deep learning (DL) models that denoise late iodine enhancement (LIE) images and enable accurate extracellular volume (ECV) quantification. This study retrospectively included patients with chest discomfort who underwent CT myocardial perfusion + CT angiography + LIE at two hospitals. Two DL models, a residual dense network (RDN) and a conditional generative adversarial network (cGAN), were developed and validated. In total, 423 patients were randomly divided into training (182 patients), tuning (48 patients), internal validation (92 patients), and external validation (101 patients) groups. LIE<sub>single</sub> (single-stack image), LIE<sub>averaging</sub> (averaging of multiple-stack images), LIE<sub>RDN</sub> (single-stack image denoised by RDN), and LIE<sub>GAN</sub> (single-stack image denoised by cGAN) sets were generated. Image quality score, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were compared across the four LIE sets. The identifiability of denoised images for positive LIE and increased ECV (> 30%) was assessed. The image quality of LIE<sub>GAN</sub> (SNR: 13.3 ± 1.9; CNR: 4.5 ± 1.1) and LIE<sub>RDN</sub> (SNR: 20.5 ± 4.7; CNR: 7.5 ± 2.3) images was markedly better than that of LIE<sub>single</sub> (SNR: 4.4 ± 0.7; CNR: 1.6 ± 0.4). At the per-segment level, the area under the curve (AUC) of LIE<sub>RDN</sub> images for LIE evaluation was significantly improved compared with those of LIE<sub>GAN</sub> and LIE<sub>single</sub> images (p = 0.040 and p < 0.001, respectively). Meanwhile, the AUC and accuracy of ECV<sub>RDN</sub> were significantly higher than those of ECV<sub>GAN</sub> and ECV<sub>single</sub> at the per-segment level (p < 0.001 for all). The RDN model generated denoised LIE images with markedly higher SNR and CNR than the cGAN model and the original images, significantly improving the identifiability of visual analysis. Moreover, using denoised single-stack images led to accurate CT-ECV quantification.
Question Can the developed models denoise CT-derived late iodine enhancement images and improve the signal-to-noise ratio? Findings The residual dense network model significantly improved image quality for late iodine enhancement and enabled accurate CT-derived extracellular volume quantification. Clinical relevance The residual dense network model generates denoised late iodine enhancement images with the highest signal-to-noise ratio and enables accurate quantification of extracellular volume.
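The CT-ECV quantification that this study's denoised images feed into follows the standard formula ECV = (1 - Hct) * (dHU_myocardium / dHU_blood), where dHU is the attenuation change from pre-contrast to late enhancement. A minimal sketch of that arithmetic (variable names are illustrative, not from the paper):

```python
def ct_ecv(hu_myo_pre: float, hu_myo_late: float,
           hu_blood_pre: float, hu_blood_late: float,
           hematocrit: float) -> float:
    """Extracellular volume fraction (%) from pre- and late-enhancement CT.

    ECV = (1 - Hct) * (dHU_myocardium / dHU_blood), where dHU is the
    attenuation change between the late iodine enhancement scan and the
    pre-contrast scan, for myocardium and blood pool respectively.
    """
    d_myo = hu_myo_late - hu_myo_pre
    d_blood = hu_blood_late - hu_blood_pre
    if d_blood == 0:
        raise ValueError("blood-pool enhancement must be non-zero")
    return (1.0 - hematocrit) * (d_myo / d_blood) * 100.0
```

Noise in the single-stack HU measurements propagates directly into this ratio, which is why denoising matters for the increased-ECV (> 30%) classification the abstract evaluates.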

Super-resolution deep learning reconstruction for improved quality of myocardial CT late enhancement.

Takafuji M, Kitagawa K, Mizutani S, Hamaguchi A, Kisou R, Sasaki K, Funaki Y, Iio K, Ichikawa K, Izumi D, Okabe S, Nagata M, Sakuma H

PubMed, Jul 1 2025
Myocardial computed tomography (CT) late enhancement (LE) allows assessment of myocardial scarring. Super-resolution deep learning image reconstruction (SR-DLR) trained on data acquired from ultra-high-resolution CT may improve image quality for CT-LE. This study therefore investigated image noise and image quality with SR-DLR compared with conventional DLR (C-DLR) and hybrid iterative reconstruction (hybrid IR). We retrospectively analyzed 30 patients who underwent CT-LE using 320-row CT. The CT protocol comprised stress dynamic CT perfusion, coronary CT angiography, and CT-LE. CT-LE images were reconstructed using three different algorithms: SR-DLR, C-DLR, and hybrid IR. Image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and qualitative image quality scores (noise reduction, sharpness, visibility of scar and myocardial border, and overall image quality) were assessed. Inter-observer differences in myocardial scar sizing in CT-LE by the three algorithms were also compared. SR-DLR significantly decreased image noise by 35% compared to C-DLR (median 6.2 HU, interquartile range [IQR] 5.6-7.2 HU vs 9.6 HU, IQR 8.4-10.7 HU; p < 0.001) and by 37% compared to hybrid IR (9.8 HU, IQR 8.5-12.0 HU; p < 0.001). SNR and CNR of CT-LE reconstructed using SR-DLR were significantly higher than with C-DLR (both p < 0.001) and hybrid IR (both p < 0.05). All qualitative image quality scores were higher with SR-DLR than with C-DLR and hybrid IR (all p < 0.001). Inter-observer differences in scar sizing were reduced with SR-DLR and C-DLR compared with hybrid IR (both p = 0.02). SR-DLR reduces image noise and improves image quality of myocardial CT-LE compared with C-DLR and hybrid IR, and improves inter-observer reproducibility of scar sizing compared to hybrid IR. The SR-DLR approach has the potential to improve the assessment of myocardial scar by CT late enhancement.
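The SNR and CNR figures recurring across these reconstruction studies follow the usual ROI definitions: mean attenuation over noise SD, and ROI-to-background contrast over background SD. A minimal sketch with illustrative ROI inputs (the exact ROI placement in each study varies and is not reproduced here):

```python
import statistics

def snr(roi_values):
    """Signal-to-noise ratio of an ROI: mean attenuation over its standard deviation."""
    return statistics.mean(roi_values) / statistics.stdev(roi_values)

def cnr(roi_values, background_values):
    """Contrast-to-noise ratio: ROI-minus-background contrast over background SD."""
    contrast = statistics.mean(roi_values) - statistics.mean(background_values)
    return contrast / statistics.stdev(background_values)
```

Because the denominator is the noise SD, any reconstruction that lowers image noise at fixed contrast (as SR-DLR does here) raises both metrics.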

Dynamic frame-by-frame motion correction for 18F-flurpiridaz PET-MPI using convolution neural network

Urs, M., Killekar, A., Builoff, V., Lemley, M., Wei, C.-C., Ramirez, G., Kavanagh, P., Buckley, C., Slomka, P. J.

medRxiv preprint, Jul 1 2025
Purpose: Precise quantification of myocardial blood flow (MBF) and myocardial flow reserve (MFR) in 18F-flurpiridaz PET relies significantly on motion correction (MC). However, manual frame-by-frame correction is time-consuming, leads to significant inter-observer variability, and requires substantial experience. We propose a deep learning (DL) framework for automatic MC of 18F-flurpiridaz PET. Methods: The method employs a 3D ResNet-based architecture that takes 3D PET volumes as input and outputs motion vectors. It was validated using 5-fold cross-validation on data from 32 sites of a Phase III clinical trial (NCT01347710). Manual corrections from two experienced operators served as ground truth, and data augmentation using simulated vectors enhanced training robustness. The study compared the DL approach with both manual and standard non-AI automatic MC methods, assessing agreement and diagnostic accuracy using minimal segmental MBF and MFR. Results: The areas under the receiver operating characteristic curve (AUC) for significant CAD were comparable between DL-MC MBF and manual-MC MBF from the two operators (AUC = 0.897, 0.892, and 0.889, respectively; p > 0.05) and standard non-AI automatic MC (AUC = 0.877; p > 0.05), and significantly higher than with no MC (AUC = 0.835; p < 0.05). Similar findings were observed with MFR. The 95% confidence limits for agreement with the operators were ±0.49 ml/g/min (mean difference = 0.00) for MFR and ±0.24 ml/g/min (mean difference = 0.00) for MBF. Conclusion: DL-MC is significantly faster than, and diagnostically comparable to, manual MC. The quantitative MBF and MFR results obtained with DL-MC agree more closely with expert manual correction than does standard non-AI automatic MC in patients undergoing 18F-flurpiridaz PET-MPI.
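Applying per-frame translation vectors of the kind such a network predicts can be sketched generically as below. This is an illustration using scipy, not the authors' pipeline; the (dz, dy, dx) voxel convention and linear interpolation are assumptions:

```python
import numpy as np
from scipy.ndimage import shift

def apply_motion_vectors(frames, vectors):
    """Apply per-frame 3D translation corrections to a dynamic PET series.

    `frames` has shape (n_frames, z, y, x); `vectors` has shape (n_frames, 3),
    each row a (dz, dy, dx) correction in voxels, as a motion-correction
    network might predict. Linear interpolation, edge values replicated.
    """
    corrected = np.empty_like(frames, dtype=float)
    for i, (frame, vec) in enumerate(zip(frames, vectors)):
        corrected[i] = shift(frame, vec, order=1, mode="nearest")
    return corrected
```

In a real pipeline the corrected series would then feed kinetic modeling for MBF and MFR; here only the resampling step is shown.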

Automated vs manual cardiac MRI planning: a single-center prospective evaluation of reliability and scan times.

Glessgen C, Crowe LA, Wetzl J, Schmidt M, Yoon SS, Vallée JP, Deux JF

PubMed, Jul 1 2025
To evaluate the impact of an AI-based automated cardiac MRI (CMR) planning software on procedure errors and scan times compared with manual planning alone. Consecutive patients undergoing non-stress CMR were prospectively enrolled at a single center (August 2023-February 2024) and randomized to manual or automated scan execution using prototype software. Patients with pacemakers, targeted indications, or inability to consent were excluded. All patients underwent the same CMR protocol with contrast, in breath-hold (BH) or free-breathing (FB) mode. Supervising radiologists recorded procedure errors (plane prescription, forgotten views, incorrect propagation of cardiac planes, and field-of-view mismanagement). Scan times and the idle phase (non-acquisition portion) were computed from scanner logs. Most data were non-normally distributed and were compared using non-parametric tests. Eighty-two patients (mean age, 51.6 years ± 17.5; 56 men) were included. Forty-four patients underwent automated and 38 manual CMR. The mean rate of procedure errors was significantly lower in the automated group (0.45) than in the manual group (1.13; p = 0.01). The rate of error-free examinations was higher in the automated group (31/44; 70.5%) than in the manual group (17/38; 44.7%; p = 0.03). Automated studies were shorter than manual studies in FB (30.3 vs 36.5 min, p < 0.001) but had similar durations in BH (42.0 vs 43.5 min, p = 0.42). The idle phase was lower in automated studies for both FB and BH strategies (both p < 0.001). The AI-based automated software performed CMR planning at a clinical level, with fewer errors and improved efficiency compared with manual planning. Question What is the impact of an AI-based automated CMR planning software on procedure errors and scan times compared to manual planning alone? Findings Software-driven examinations were more reliable (71% error-free) than human-planned ones (45% error-free) and showed improved efficiency with reduced idle time. Clinical relevance CMR examinations require extensive technologist training and continuous attention, and involve many planning steps. A fully automated software reliably acquired non-stress CMR, potentially reducing the risk of mistakes and increasing data homogeneity.

Artificial intelligence-powered coronary artery disease diagnosis from SPECT myocardial perfusion imaging: a comprehensive deep learning study.

Hajianfar G, Gharibi O, Sabouri M, Mohebi M, Amini M, Yasemi MJ, Chehreghani M, Maghsudi M, Mansouri Z, Edalat-Javid M, Valavi S, Bitarafan Rajabi A, Salimi Y, Arabi H, Rahmim A, Shiri I, Zaidi H

PubMed, Jul 1 2025
Myocardial perfusion imaging (MPI) using single-photon emission computed tomography (SPECT) is a well-established modality for noninvasive diagnostic assessment of coronary artery disease (CAD). However, the time-consuming and experience-dependent visual interpretation of SPECT images remains a limitation in the clinic. We aimed to develop advanced models to diagnose CAD using different supervised and semi-supervised deep learning (DL) algorithms and training strategies, including transfer learning and data augmentation, with SPECT-MPI and invasive coronary angiography (ICA) as standards of reference. A total of 940 patients who underwent SPECT-MPI were enrolled (281 of whom also underwent ICA). Quantitative perfusion SPECT (QPS) was used to extract polar maps of the rest and stress states. We defined two tasks: (1) automated CAD diagnosis with expert reader (ER) assessment of SPECT-MPI as reference, and (2) CAD diagnosis from SPECT-MPI based on reference ICA reports. In task 2, we used six strategies for training the DL models. We implemented 13 different DL models with 4 input types, with and without data augmentation (WAug and WoAug), to train, validate, and test the DL models (728 models in total). One hundred patients with ICA as standard of reference (the same patients as in task 1) were used to evaluate the models per vessel and per patient. Metrics such as the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, specificity, precision, and balanced accuracy were reported. DeLong and pairwise Wilcoxon rank-sum tests were used to compare models and strategies, respectively, after 1000 bootstraps on the test data for all models. We also compared the performance of our best DL model with the ER's diagnosis. In task 1, the DenseNet201 Late Fusion (AUC = 0.89) and ResNet152V2 Late Fusion (AUC = 0.83) models outperformed the other models in per-vessel and per-patient analyses, respectively. In task 2, the best models for CAD prediction based on ICA were Strategy 3 (a combination of ER- and ICA-based diagnosis in the training data) with WoAug InceptionResNetV2 Early Fusion (AUC = 0.71) and Strategy 5 (a semi-supervised approach) with WoAug ResNet152V2 Early Fusion (AUC = 0.77) in per-vessel and per-patient analyses, respectively. Moreover, saliency maps showed that the models focus on relevant regions for decision making. Our study confirmed the potential of DL-based analysis of SPECT-MPI polar maps in CAD diagnosis. In the automation of ER-based diagnosis, model performance was promising, showing accuracy close to expert-level analysis. The study demonstrated that using different strategies of data combination, such as including patients with and without ICA, along with different training methods, such as semi-supervised learning, can increase the performance of DL models. The proposed DL models could be coupled with computer-aided diagnosis systems and used to assist nuclear medicine physicians in their diagnosis and reporting, although only in the LAD territory.
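The bootstrap comparison of model AUCs described above can be sketched generically. Here the AUC is computed via the Mann-Whitney statistic, and the resampled confidence interval on the AUC difference is an illustration of the general technique, not the authors' exact DeLong-based procedure:

```python
import random

def auc(labels, scores):
    """AUC as the Mann-Whitney probability that a random positive
    is scored above a random negative (ties counted as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_auc_diff(labels, scores_a, scores_b, n_boot=1000, seed=0):
    """95% bootstrap CI for the AUC difference of two models on one test set."""
    rng = random.Random(seed)
    n = len(labels)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        yb = [labels[i] for i in idx]
        if len(set(yb)) < 2:  # resample must contain both classes
            continue
        diffs.append(auc(yb, [scores_a[i] for i in idx])
                     - auc(yb, [scores_b[i] for i in idx]))
    diffs.sort()
    return diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]
```

An interval excluding zero would indicate a significant AUC difference between the two models at roughly the 5% level.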

Unsupervised Cardiac Video Translation Via Motion Feature Guided Diffusion Model

Swakshar Deb, Nian Wu, Frederick H. Epstein, Miaomiao Zhang

arXiv preprint, Jul 1 2025
This paper presents a novel motion feature guided diffusion model for unpaired video-to-video translation (MFD-V2V), designed to synthesize dynamic, high-contrast cine cardiac magnetic resonance (CMR) from lower-contrast, artifact-prone displacement encoding with stimulated echoes (DENSE) CMR sequences. To achieve this, we first introduce a Latent Temporal Multi-Attention (LTMA) registration network that effectively learns more accurate and consistent cardiac motions from cine CMR image videos. A multi-level motion feature guided diffusion model, equipped with a specialized Spatio-Temporal Motion Encoder (STME) to extract fine-grained motion conditioning, is then developed to improve synthesis quality and fidelity. We evaluate our method, MFD-V2V, on a comprehensive cardiac dataset, demonstrating superior performance over the state-of-the-art in both quantitative metrics and qualitative assessments. Furthermore, we show that our synthesized cine CMRs benefit downstream clinical and analytical tasks, underscoring the broader impact of our approach. Our code is publicly available at https://github.com/SwaksharDeb/MFD-V2V.

Coronary p-Graph: Automatic classification and localization of coronary artery stenosis from Cardiac CTA using DSA-based annotations.

Zhang Y, Zhang X, He Y, Zang S, Liu H, Liu T, Zhang Y, Chen Y, Shu H, Coatrieux JL, Tang H, Zhang L

PubMed, Jul 1 2025
Coronary artery disease (CAD) is a prevalent cardiovascular condition with profound health implications. Digital subtraction angiography (DSA) remains the gold standard for diagnosing vascular disease, but its invasiveness and procedural demands underscore the need for alternative diagnostic approaches. Coronary computed tomography angiography (CCTA) has emerged as a promising non-invasive method for accurately classifying and localizing coronary artery stenosis. However, the complexity of CCTA images and their dependence on manual interpretation highlight the essential role of artificial intelligence in supporting clinicians in stenosis detection. This paper introduces a novel framework, the Coronary proposal-based Graph Convolutional Network (Coronary p-Graph), designed for the automated detection of coronary stenosis from CCTA scans. The framework transforms CCTA data into curved multi-planar reformation (CMPR) images that delineate the coronary artery centerline. After aligning the CMPR volume along this centerline, the entire vasculature is analyzed by a convolutional neural network (CNN) for initial feature extraction. Based on predefined criteria informed by prior knowledge, the model generates candidate stenotic segments, termed "proposals," which serve as graph nodes. The spatial relationships between nodes are then modeled as edges, constructing a graph representation that is processed by a graph convolutional network (GCN) for precise classification and localization of stenotic segments. All CCTA images were rigorously annotated by three expert radiologists, using DSA reports as the reference standard. This methodology aims to deliver diagnostic performance approaching that of invasive DSA from non-invasive CCTA alone, potentially reducing the need for invasive procedures. The proposed method was evaluated on a retrospective dataset comprising 259 cases, each with paired CCTA and corresponding DSA reports.
Quantitative analyses demonstrated the superior performance of our approach compared to existing methods, with the following metrics: accuracy of 0.844, specificity of 0.910, area under the receiver operating characteristic curve (AUC) of 0.74, and mean absolute error (MAE) of 0.157.
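The proposal-graph idea (candidate segments ordered along the centerline as nodes, spatial adjacency as edges, then graph convolution) can be sketched in simplified form. The chain adjacency and the single symmetrically normalized GCN layer below are illustrative assumptions, not the paper's architecture:

```python
import numpy as np

def build_chain_adjacency(n_proposals: int) -> np.ndarray:
    """Adjacency for proposals ordered along the vessel centerline:
    each candidate segment is linked to its neighbours, plus self-loops."""
    a = np.eye(n_proposals)
    for i in range(n_proposals - 1):
        a[i, i + 1] = a[i + 1, i] = 1.0
    return a

def gcn_layer(features: np.ndarray, adjacency: np.ndarray,
              weights: np.ndarray) -> np.ndarray:
    """One graph-convolution layer: the symmetrically normalised adjacency
    aggregates neighbour features, followed by a linear map and ReLU."""
    deg = adjacency.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    a_hat = d_inv_sqrt @ adjacency @ d_inv_sqrt
    return np.maximum(a_hat @ features @ weights, 0.0)
```

In the full framework the node features would come from the CNN stage, and stacked layers would end in per-node stenosis classification heads.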

Integrating multi-scale information and diverse prompts in large model SAM-Med2D for accurate left ventricular ejection fraction estimation.

Wu Y, Zhao T, Hu S, Wu Q, Chen Y, Huang X, Zheng Z

PubMed, Jul 1 2025
Left ventricular ejection fraction (LVEF) is a critical indicator of cardiac function, aiding in the assessment of heart conditions. Accurate segmentation of the left ventricle (LV) is essential for LVEF calculation. However, current methods are often limited by small datasets and exhibit poor generalization. While leveraging large models can address this issue, many fail to capture multi-scale information and place an additional burden on users to generate prompts. To overcome these challenges, we propose LV-SAM, a model based on the large model SAM-Med2D, for accurate LV segmentation. It comprises three key components: an image encoder with a multi-scale adapter (MSAd), a multimodal prompt encoder (MPE), and a multi-scale decoder (MSD). The MSAd extracts multi-scale information at the encoder level and fine-tunes the model, while the MSD employs skip connections to effectively exploit multi-scale information at the decoder level. Additionally, we introduce an automated pipeline for generating self-extracted dense prompts and use a large language model to generate text prompts, reducing the user burden. The MPE processes these prompts, further enhancing model performance. Evaluations on the CAMUS dataset show that LV-SAM outperforms existing state-of-the-art methods in LV segmentation, achieving the lowest MAE of 5.016 in LVEF estimation.
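LVEF itself is a simple ratio of segmented volumes, EF = (EDV - ESV) / EDV * 100. A minimal sketch of that final step (volume estimation from the segmentation masks is assumed to happen upstream):

```python
def lvef(edv_ml: float, esv_ml: float) -> float:
    """Left ventricular ejection fraction (%) from end-diastolic (EDV)
    and end-systolic (ESV) LV volumes, e.g. derived from segmented
    end-diastolic and end-systolic frames."""
    if edv_ml <= 0 or esv_ml < 0 or esv_ml > edv_ml:
        raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
    return (edv_ml - esv_ml) / edv_ml * 100.0
```

Segmentation errors at either phase propagate directly into this ratio, which is why the MAE on LVEF is the headline metric for segmentation models like the one above.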