
A generalizable diffusion framework for 3D low-dose and few-view cardiac SPECT imaging.

Xie H, Gan W, Ji W, Chen X, Alashi A, Thorn SL, Zhou B, Liu Q, Xia M, Guo X, Liu YH, An H, Kamilov US, Wang G, Sinusas AJ, Liu C

PubMed · Jul 30, 2025
Myocardial perfusion imaging using SPECT is widely used to diagnose coronary artery disease, but image quality can be degraded in low-dose and few-view acquisition settings. Although various deep learning methods have been introduced to improve image quality from low-dose or few-view SPECT data, previous approaches often fail to generalize across different acquisition settings, limiting real-world applicability. This work introduces DiffSPECT-3D, a diffusion framework for 3D cardiac SPECT imaging that adapts to different acquisition settings without further network re-training or fine-tuning. Using both image and projection data, a consistency strategy is proposed to ensure that diffusion sampling at each step aligns with the low-dose/few-view projection measurements, the image data, and the scanner geometry, thus enabling generalization to different low-dose/few-view settings. Incorporating anatomical spatial information from CT and a total variation constraint, we propose a 2.5D conditional strategy that allows DiffSPECT-3D to observe 3D contextual information from the entire image volume, addressing the memory and computational issues of 3D diffusion models. We extensively evaluated the proposed method on 1,325 clinical ⁹⁹ᵐTc-tetrofosmin stress/rest studies from 795 patients. Each study was reconstructed into 5 low-count levels (1% to 50%) and 5 few-view levels (1 to 9 views) for model evaluation. Validated against cardiac catheterization results and diagnostic review by nuclear cardiologists, the presented results show the potential to achieve low-dose and few-view SPECT imaging without compromising clinical performance. Additionally, DiffSPECT-3D can be applied directly to full-dose SPECT images to further improve image quality, especially in a low-dose, stress-first cardiac SPECT imaging protocol.
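
The per-step consistency idea generalizes beyond this paper; below is a minimal sketch of measurement-guided reverse diffusion, assuming generic `forward_op`/`adjoint_op` projection operators and a trained `denoise_fn` (all placeholder names, not taken from the paper's code):

```python
import numpy as np

def data_consistency_step(x, y, forward_op, adjoint_op, step=0.5):
    """Nudge the current sample x toward agreement with the measured
    projections y via one gradient step on ||forward_op(x) - y||^2."""
    residual = forward_op(x) - y
    return x - step * adjoint_op(residual)

def guided_reverse_diffusion(x_t, y, forward_op, adjoint_op, denoise_fn, n_steps=50):
    """Reverse diffusion in which every denoising step is followed by a
    measurement-consistency correction, so sampling stays tied to the
    acquired low-dose/few-view data."""
    x = x_t
    for t in reversed(range(n_steps)):
        x = denoise_fn(x, t)                                     # learned reverse step
        x = data_consistency_step(x, y, forward_op, adjoint_op)  # enforce projections
    return x
```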

Recovering Diagnostic Value: Super-Resolution-Aided Echocardiographic Classification in Resource-Constrained Imaging

Krishan Agyakari Raja Babu, Om Prabhu, Annu, Mohanasankar Sivaprakasam

arXiv preprint · Jul 30, 2025
Automated cardiac interpretation in resource-constrained settings (RCS) is often hindered by poor-quality echocardiographic imaging, limiting the effectiveness of downstream diagnostic models. While super-resolution (SR) techniques have shown promise in enhancing magnetic resonance imaging (MRI) and computed tomography (CT) scans, their application to echocardiography, a widely accessible but noise-prone modality, remains underexplored. In this work, we investigate the potential of deep learning-based SR to improve classification accuracy on low-quality 2D echocardiograms. Using the publicly available CAMUS dataset, we stratify samples by image quality and evaluate two clinically relevant tasks of varying complexity: a relatively simple Two-Chamber vs. Four-Chamber (2CH vs. 4CH) view classification and a more complex End-Diastole vs. End-Systole (ED vs. ES) phase classification. We apply two widely used SR models, the Super-Resolution Generative Adversarial Network (SRGAN) and the Super-Resolution Residual Network (SRResNet), to enhance poor-quality images and observe significant gains in performance metrics, particularly with SRResNet, which also offers computational efficiency. Our findings demonstrate that SR can effectively recover diagnostic value in degraded echo scans, making it a viable tool for AI-assisted care in RCS, achieving more with less.
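
For context, SRResNet is the generator network of SRGAN trained with a pixel-wise loss rather than an adversarial one; a minimal PyTorch sketch of its characteristic residual block (standard SRGAN/SRResNet design, not the paper's exact configuration):

```python
import torch
import torch.nn as nn

class SRResidualBlock(nn.Module):
    """One SRResNet-style residual block: conv-BN-PReLU-conv-BN plus a skip
    connection. SRGAN reuses this same generator, trained adversarially."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.PReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)
```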

Cardiac-CLIP: A Vision-Language Foundation Model for 3D Cardiac CT Images

Yutao Hu, Ying Zheng, Shumei Miao, Xiaolei Zhang, Jiahao Xia, Yaolei Qi, Yiyang Zhang, Yuting He, Qian Chen, Jing Ye, Hongyan Qiao, Xiuhua Hu, Lei Xu, Jiayin Zhang, Hui Liu, Minwen Zheng, Yining Wang, Daimin Zhang, Ji Zhang, Wenqi Shao, Yun Liu, Longjiang Zhang, Guanyu Yang

arXiv preprint · Jul 29, 2025
Foundation models have demonstrated remarkable potential in the medical domain. However, their application to complex cardiovascular diagnostics remains underexplored. In this paper, we present Cardiac-CLIP, a multi-modal foundation model designed for 3D cardiac CT images. Cardiac-CLIP is developed through a two-stage pre-training strategy. The first stage employs a 3D masked autoencoder (MAE) to perform self-supervised representation learning from large-scale unlabeled volumetric data, enabling the visual encoder to capture rich anatomical and contextual features. In the second stage, contrastive learning is introduced to align visual and textual representations, facilitating cross-modal understanding. To support the pre-training, we collect 16,641 real clinical CT scans, supplemented by 114k publicly available scans. Meanwhile, we standardize free-text radiology reports into unified templates and construct pathology vectors from diagnostic attributes, from which a soft-label matrix is generated to supervise the contrastive learning process. To comprehensively evaluate the effectiveness of Cardiac-CLIP, we collect 6,722 real clinical cases from 12 independent institutions, along with open-source data, to construct the evaluation dataset. Cardiac-CLIP is comprehensively evaluated across multiple tasks, including cardiovascular abnormality classification, information retrieval, and clinical analysis. Experimental results demonstrate that Cardiac-CLIP achieves state-of-the-art performance across various downstream tasks on both internal and external data. In particular, Cardiac-CLIP is effective in supporting complex clinical tasks such as prospective prediction of acute coronary syndrome, which is notoriously difficult in real-world scenarios.
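
The soft-label supervision described above can be read as a CLIP-style loss whose targets are attribute-derived similarity rows rather than the identity matrix; a hedged illustration follows (the function and tensor names are assumptions, not the authors' code; requires PyTorch ≥ 1.10, where cross_entropy accepts probabilistic targets):

```python
import torch
import torch.nn.functional as F

def soft_label_contrastive_loss(img_emb, txt_emb, soft_targets, temperature=0.07):
    """CLIP-style alignment where the supervision is a (B, B) soft similarity
    matrix, e.g., built from shared diagnostic attributes, instead of the
    identity matrix used in vanilla CLIP."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature            # (B, B) similarities
    t_i2t = soft_targets / soft_targets.sum(dim=1, keepdim=True)
    t_t2i = soft_targets.t() / soft_targets.t().sum(dim=1, keepdim=True)
    return 0.5 * (F.cross_entropy(logits, t_i2t) +
                  F.cross_entropy(logits.t(), t_t2i))
```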

Determining the scanning range of coronary computed tomography angiography based on deep learning.

Zhao YH, Fan YH, Wu XY, Qin T, Sun QT, Liang BH

PubMed · Jul 28, 2025
Coronary computed tomography angiography (CCTA) is essential for diagnosing coronary artery disease, as it provides detailed images of the heart's blood vessels to identify blockages or abnormalities. Traditionally, determining the computed tomography (CT) scanning range has relied on manual methods due to limited automation in this area. This study aimed to develop and evaluate a novel deep learning approach to automate the determination of CCTA scan ranges from anteroposterior scout images. A retrospective analysis was conducted on chest CT data from 1388 patients at the radiology department of a university-affiliated hospital, collected between February 27 and March 27, 2024. A deep learning model was trained on anteroposterior scout images with annotations based on CCTA standards. The dataset was split into training (672 cases), validation (167 cases), and test (167 cases) sets to ensure robust model evaluation. On the test set, the model achieved a mean average precision (mAP50) of 0.995 and an mAP50-95 of 0.994 for determining CCTA scan ranges. This study demonstrates that (1) anteroposterior scout images can effectively estimate CCTA scan ranges, and (2) the estimates can be dynamically adjusted to meet the needs of different medical institutions.
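
The reported mAP metrics imply an object-detection formulation; one plausible post-processing step is converting a detected bounding box on the scout into a padded craniocaudal scan range. The helper below is hypothetical, not taken from the paper:

```python
def scan_range_from_detection(box_xyxy, mm_per_px, table_origin_mm=0.0, margin_mm=10.0):
    """Turn a detected cardiac bounding box on the AP scout (pixel coordinates
    (x1, y1, x2, y2)) into a craniocaudal scan range in table coordinates,
    padded by a safety margin."""
    _, y_top, _, y_bottom = box_xyxy
    z_start = table_origin_mm + y_top * mm_per_px - margin_mm
    z_end = table_origin_mm + y_bottom * mm_per_px + margin_mm
    return z_start, z_end

# Example: a box spanning rows 210-460 on a scout with 0.8 mm pixels
print(scan_range_from_detection((120, 210, 380, 460), mm_per_px=0.8))
```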

Accelerating cardiac radial-MRI: Fully polar based technique using compressed sensing and deep learning.

Ghodrati V, Duan J, Ali F, Bedayat A, Prosper A, Bydder M

PubMed · Jul 26, 2025
Fast radial-MRI approaches based on compressed sensing (CS) and deep learning (DL) often use the non-uniform fast Fourier transform (NUFFT) as the forward imaging operator, which can introduce interpolation errors and reduce image quality. Using the polar Fourier transform (PFT), we developed fully polar CS and DL algorithms for fast 2D cardiac radial-MRI. Our methods reconstruct images in polar spatial space directly from polar k-space data, eliminating frequency interpolation and yielding an easy-to-compute data-consistency term for the DL framework via a variable-splitting (VS) scheme. Furthermore, PFT reconstruction produces initial images with fewer artifacts in a reduced field of view, making it a better starting point for CS and DL algorithms than NUFFT, which often produces global streaking artifacts; this matters especially for dynamic imaging, where information from a small region of interest is critical. In the cardiac region, the PFT-based CS technique outperformed NUFFT-based CS at acceleration rates of 5x (mean SSIM: 0.8831 vs. 0.8526), 10x (0.8195 vs. 0.7981), and 15x (0.7720 vs. 0.7503). Our PFT(VS)-based DL technique outperformed the NUFFT(GD)-based DL method, which used unrolled gradient descent with the NUFFT as the forward imaging operator, with mean SSIM scores of 0.8914 versus 0.8617 at 10x and 0.8470 versus 0.8301 at 15x. Radiological assessments scored PFT(VS)-based DL at 2.9±0.30 and 2.73±0.45 at 5x and 10x, versus 2.7±0.47 and 2.40±0.50 for NUFFT(GD)-based DL. Our methods offer a promising alternative to NUFFT-based fast radial-MRI for dynamic imaging, prioritizing reconstruction quality in a small region of interest over whole-image quality.
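
The variable-splitting scheme mentioned above alternates a prior step with a data-consistency step; a generic sketch follows, with a plug-in `denoiser` standing in for the learned prior (names and step sizes are illustrative, not the authors' implementation):

```python
import numpy as np

def vs_reconstruction(y, A, At, denoiser, n_iters=20, mu=0.1, step=0.5):
    """Generic variable-splitting loop: alternate a regularization step with a
    data-consistency step tied to the forward model A. With a polar forward
    transform, A and At need no k-space interpolation, which is the point
    made in the abstract above."""
    x = At(y)                  # zero-filled starting image
    z = x.copy()
    for _ in range(n_iters):
        x = denoiser(z)                          # prior / proximal step
        grad = At(A(z) - y) + mu * (z - x)       # gradient of the DC subproblem
        z = z - step * grad                      # data-consistency update
    return x
```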

Artificial intelligence-powered software outperforms interventional cardiologists in assessment of IVUS-based stent optimization.

Rubio PM, Garcia-Garcia HM, Galo J, Chaturvedi A, Case BC, Mintz GS, Ben-Dor I, Hashim H, Waksman R

PubMed · Jul 26, 2025
Optimal stent deployment assessed by intravascular ultrasound (IVUS) is associated with improved outcomes after percutaneous coronary intervention (PCI). However, IVUS remains underutilized due to its time-consuming analysis and reliance on operator expertise. AVVIGO™+, an FDA-approved artificial intelligence (AI) software, offers automated lesion assessment, but its performance for stent evaluation has not been thoroughly investigated. This study assessed whether AVVIGO™+ provides a superior evaluation of the IVUS-based stent expansion index (%stent expansion = minimum stent area (MSA) / distal reference lumen area) and geographic miss (i.e., >50% plaque burden (PB) at the stent edges) compared to the current gold standard: frame-by-frame visual assessment by interventional cardiologists (IC), who select the MSA and the reference frame with the largest lumen area within 5 mm of the stent edge, following expert consensus. This retrospective study included 60 patients (47,997 IVUS frames) who underwent IVUS-guided PCI, independently analyzed by IC and AVVIGO™+. Assessments included MSA, stent expansion index, and PB at the proximal and distal reference segments. For expansion, a threshold of 80% was used to define suboptimal results. The time required for expansion analysis was recorded for both methods, and concordance as well as absolute and relative differences were evaluated. AVVIGO™+ consistently identified lower mean expansion (70.3%) than IC (91.2%) (p < 0.0001), primarily because it detected frames with smaller MSA values (5.94 vs. 7.19 mm², p = 0.0053). This led to 25 discordant cases in which AVVIGO™+ reported suboptimal expansion while IC classified the result as adequate. Analysis time was significantly shorter with AVVIGO™+ (0.76 ± 0.39 min) than with IC (1.89 ± 0.62 min) (p < 0.0001), a 59.7% reduction. For geographic miss, AVVIGO™+ reported higher PB than IC at both the distal (51.8% vs. 43.0%, p < 0.0001) and proximal (50.0% vs. 43.0%, p = 0.0083) segments. When applying the 50% PB threshold, AVVIGO™+ identified PB ≥50% not seen by IC in 12 cases (6 distal, 6 proximal). AVVIGO™+ demonstrated improved detection of suboptimal stent expansion and geographic miss compared to interventional cardiologists, while also significantly reducing analysis time. These findings suggest that AI-based platforms may offer a more reliable and efficient approach to IVUS-guided stent optimization, with the potential to enhance consistency in clinical practice.
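
The two indices under comparison are simple to compute once MSA, reference lumen area, and plaque burden are measured; a small sketch applying the thresholds stated in the abstract (the example reference lumen area is hypothetical):

```python
def stent_expansion_pct(msa_mm2, distal_ref_lumen_mm2):
    """%Stent expansion = MSA / distal reference lumen area (definition above)."""
    return 100.0 * msa_mm2 / distal_ref_lumen_mm2

def assess_stent(msa_mm2, distal_ref_mm2, pb_distal_pct, pb_proximal_pct):
    """Apply the study's thresholds: expansion < 80% is suboptimal; plaque
    burden >= 50% at either stent edge flags geographic miss."""
    expansion = stent_expansion_pct(msa_mm2, distal_ref_mm2)
    return {
        "expansion_pct": round(expansion, 1),
        "suboptimal_expansion": expansion < 80.0,
        "geographic_miss": max(pb_distal_pct, pb_proximal_pct) >= 50.0,
    }

# Example using the reported mean MSA (5.94 mm²); the 8.45 mm² reference
# lumen area is hypothetical, chosen to reproduce the ~70.3% mean expansion.
print(assess_stent(5.94, 8.45, pb_distal_pct=51.8, pb_proximal_pct=50.0))
```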

Artificial intelligence-assisted compressed sensing CINE enhances the workflow of cardiac magnetic resonance in challenging patients.

Wang H, Schmieder A, Watkins M, Wang P, Mitchell J, Qamer SZ, Lanza G

PubMed · Jul 26, 2025
A key challenge in cardiac magnetic resonance (CMR) is the breath-holding duration, which is difficult for cardiac patients. This study evaluated whether artificial intelligence-assisted compressed sensing CINE (AI-CS-CINE) reduces CMR image acquisition time compared to conventional CINE (C-CINE). Cardio-oncology patients (n = 60) and healthy volunteers (n = 29) underwent sequential C-CINE and AI-CS-CINE on a 1.5-T scanner. Acquisition time, visual image quality, and biventricular metrics (end-diastolic volume, end-systolic volume, stroke volume, ejection fraction, left ventricular mass, and wall thickness) were analyzed and compared between C-CINE and AI-CS-CINE with Bland-Altman analysis and the intraclass correlation coefficient (ICC). In 89 participants (58.5 ± 16.8 years; 42 males, 47 females), total AI-CS-CINE acquisition and reconstruction time (37 seconds) was 84% faster than C-CINE (238 seconds). C-CINE required repeats in 23% (20/89) of cases (approximately 8 minutes lost), whereas AI-CS-CINE needed only one repeat (1%; 2 seconds lost). AI-CS-CINE had slightly lower contrast but preserved structural clarity. Bland-Altman plots and ICCs (0.73 ≤ r ≤ 0.98) showed strong agreement for left ventricle (LV) and right ventricle (RV) metrics, including in the cardiac amyloidosis subgroup (n = 31). AI-CS-CINE enabled faster, easier imaging in patients with claustrophobia, dyspnea, arrhythmias, or restlessness, and motion-artifacted C-CINE studies could be reliably interpreted from AI-CS-CINE. Overall, AI-CS-CINE accelerated CMR image acquisition and reconstruction, preserved anatomical detail, and diminished the impact of patient-related motion. Quantitative AI-CS-CINE metrics agreed closely with C-CINE in cardio-oncology patients, including the cardiac amyloidosis cohort, and in healthy volunteers, regardless of left and right ventricular size and function. AI-CS-CINE significantly enhanced the CMR workflow, particularly in challenging cases, and the strong analytical concordance underscores its reliability and robustness as a valuable tool.
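
Bland-Altman agreement, as used here, reduces to a bias and 95% limits of agreement over paired differences; a minimal sketch with hypothetical ejection-fraction pairs (illustrative values, not study data):

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement for paired measurements, e.g., an LV
    ejection fraction measured on C-CINE (a) and AI-CS-CINE (b)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, (bias - half_width, bias + half_width)

# Hypothetical paired EF readings (%) for illustration only
c_cine = [58.0, 61.5, 45.2, 67.0, 52.3]
ai_cs = [57.1, 62.0, 44.8, 66.2, 53.0]
print(bland_altman(c_cine, ai_cs))
```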

Deep Learning-Based Multi-View Echocardiographic Framework for Comprehensive Diagnosis of Pericardial Disease

Jeong, S., Moon, I., Jeon, J., Jeong, D., Lee, J., kim, J., Lee, S.-A., Jang, Y., Yoon, Y. E., Chang, H.-J.

medRxiv preprint · Jul 25, 2025
Background: Pericardial disease exhibits a wide clinical spectrum, ranging from mild effusions to life-threatening tamponade or constrictive pericarditis. While transthoracic echocardiography (TTE) is the primary diagnostic modality, its effectiveness is limited by operator dependence and incomplete evaluation of functional impact. Existing artificial intelligence models focus primarily on effusion detection and lack comprehensive disease assessment. Methods: We developed a deep learning (DL)-based framework that sequentially assesses pericardial disease: (1) morphological changes, including pericardial effusion amount (normal/small/moderate/large) and pericardial thickening or adhesion (yes/no), using five B-mode views; and (2) hemodynamic significance (yes/no), incorporating additional inputs from Doppler and inferior vena cava measurements. The development dataset comprises 2,253 TTEs from multiple Korean institutions (225 for internal testing), and the independent external test set consists of 274 TTEs. Results: In the internal test set, the model achieved diagnostic accuracy of 81.8-97.3% for pericardial effusion classification, 91.6% for pericardial thickening/adhesion, and 86.2% for hemodynamic significance. Corresponding accuracy in the external test set was 80.3-94.2%, 94.5%, and 85.5%, respectively. Areas under the receiver operating characteristic curve (AUROCs) for the three tasks were 0.92-0.99, 0.90, and 0.79 in the internal test set, and 0.95-0.98, 0.85, and 0.76 in the external test set. Sensitivity for detecting pericardial thickening/adhesion and hemodynamic significance was modest (66.7% and 68.8% in the internal test set) but improved substantially when cases with poor image quality were excluded (77.3% and 80.8%). Similar performance gains were observed in subgroups with complete target views and a higher number of available video clips. Conclusions: This study presents the first DL-based TTE model capable of comprehensive evaluation of pericardial disease, integrating both morphological and functional assessments. The proposed framework demonstrated strong generalizability and aligned with real-world diagnostic workflow. However, caution is warranted when interpreting results under suboptimal imaging conditions.
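
The sequential design can be read as three classifiers sharing the B-mode inputs, with the hemodynamics stage receiving the extra Doppler/IVC inputs; a schematic sketch in which all three networks are placeholders (not the authors' models):

```python
def assess_pericardium(bmode_views, doppler_inputs, effusion_net, thick_net, hemo_net):
    """Sequential two-stage evaluation following the framework described above:
    morphology first (five B-mode views), then hemodynamic significance with
    additional Doppler/IVC inputs. All three networks are placeholders."""
    effusion_grade = effusion_net(bmode_views)           # normal/small/moderate/large
    thickening = thick_net(bmode_views)                  # yes/no
    significant = hemo_net(bmode_views, doppler_inputs)  # yes/no
    return {
        "effusion": effusion_grade,
        "thickening_or_adhesion": thickening,
        "hemodynamically_significant": significant,
    }
```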

A multi-dynamic low-rank deep image prior (ML-DIP) for real-time 3D cardiovascular MRI

Chong Chen, Marc Vornehm, Preethi Chandrasekaran, Muhammad A. Sultan, Syed M. Arshad, Yingmin Liu, Yuchi Han, Rizwan Ahmad

arXiv preprint · Jul 25, 2025
Purpose: To develop a reconstruction framework for 3D real-time cine cardiovascular magnetic resonance (CMR) from highly undersampled data without requiring fully sampled training data. Methods: We developed a multi-dynamic low-rank deep image prior (ML-DIP) framework that models spatial image content and temporal deformation fields using separate neural networks. These networks are optimized per scan to reconstruct the dynamic image series directly from undersampled k-space data. ML-DIP was evaluated on (i) a 3D cine digital phantom with simulated premature ventricular contractions (PVCs), (ii) ten healthy subjects (including two scanned during both rest and exercise), and (iii) five patients with PVCs. Phantom results were assessed using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM). In vivo performance was evaluated by comparing left-ventricular function quantification (against 2D real-time cine) and image quality (against 2D real-time cine and binning-based 5D-Cine). Results: In the phantom study, ML-DIP achieved PSNR > 29 dB and SSIM > 0.90 for scan times as short as two minutes, while recovering cardiac motion, respiratory motion, and PVC events. In healthy subjects, ML-DIP yielded functional measurements comparable to 2D cine and higher image quality than 5D-Cine, including during exercise with high heart rates and bulk motion. In PVC patients, ML-DIP preserved beat-to-beat variability and reconstructed irregular beats, whereas 5D-Cine showed motion artifacts and information loss due to binning. Conclusion: ML-DIP enables high-quality 3D real-time CMR with acceleration factors exceeding 1,000 by learning low-rank spatial and temporal representations from undersampled data, without relying on external fully sampled training datasets.
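
The per-scan optimization at the heart of a deep image prior needs no training set; a schematic sketch of the ML-DIP idea follows, where `spatial_net`, `deform_net`, `codes`, and the caller-supplied `warp` function are all assumptions rather than the paper's code:

```python
import torch

def ml_dip_fit(y, forward_op, spatial_net, deform_net, codes, warp,
               n_iters=3000, lr=1e-3):
    """Per-scan fitting in the deep-image-prior spirit: a static spatial volume
    and per-frame deformation fields, each produced by its own network, are
    optimized against the undersampled k-space data alone."""
    params = list(spatial_net.parameters()) + list(deform_net.parameters())
    opt = torch.optim.Adam(params, lr=lr)
    for _ in range(n_iters):
        opt.zero_grad()
        base = spatial_net(codes["spatial"])      # static 3D image content
        flows = deform_net(codes["temporal"])     # per-frame deformation fields
        frames = warp(base, flows)                # caller-supplied warping function
        loss = torch.mean(torch.abs(forward_op(frames) - y))  # k-space fidelity
        loss.backward()
        opt.step()
    return frames.detach()
```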

Extreme Cardiac MRI Analysis under Respiratory Motion: Results of the CMRxMotion Challenge

Kang Wang, Chen Qin, Zhang Shi, Haoran Wang, Xiwen Zhang, Chen Chen, Cheng Ouyang, Chengliang Dai, Yuanhan Mo, Chenchen Dai, Xutong Kuang, Ruizhe Li, Xin Chen, Xiuzheng Yue, Song Tian, Alejandro Mora-Rubio, Kumaradevan Punithakumar, Shizhan Gong, Qi Dou, Sina Amirrajab, Yasmina Al Khalil, Cian M. Scannell, Lexiaozi Fan, Huili Yang, Xiaowu Sun, Rob van der Geest, Tewodros Weldebirhan Arega, Fabrice Meriaudeau, Caner Özer, Amin Ranem, John Kalkhof, İlkay Öksüz, Anirban Mukhopadhyay, Abdul Qayyum, Moona Mazher, Steven A Niederer, Carles Garcia-Cabrera, Eric Arazo, Michal K. Grzeszczyk, Szymon Płotka, Wanqin Ma, Xiaomeng Li, Rongjun Ge, Yongqing Kou, Xinrong Chen, He Wang, Chengyan Wang, Wenjia Bai, Shuo Wang

arXiv preprint · Jul 25, 2025
Deep learning models have achieved state-of-the-art performance in automated Cardiac Magnetic Resonance (CMR) analysis. However, the efficacy of these models is highly dependent on the availability of high-quality, artifact-free images. In clinical practice, CMR acquisitions are frequently degraded by respiratory motion, yet the robustness of deep learning models against such artifacts remains an underexplored problem. To promote research in this domain, we organized the MICCAI CMRxMotion challenge. We curated and publicly released a dataset of 320 CMR cine series from 40 healthy volunteers who performed specific breathing protocols to induce a controlled spectrum of motion artifacts. The challenge comprised two tasks: 1) automated image quality assessment to classify images based on motion severity, and 2) robust myocardial segmentation in the presence of motion artifacts. A total of 22 algorithms were submitted and evaluated on the two designated tasks. This paper presents a comprehensive overview of the challenge design and dataset, reports the evaluation results for the top-performing methods, and further investigates the impact of motion artifacts on five clinically relevant biomarkers. All resources and code are publicly available at: https://github.com/CMRxMotion
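
For reference, segmentation tasks in challenges of this kind are typically scored with the Dice coefficient; a minimal implementation (the per-structure label convention is an assumption):

```python
import numpy as np

def dice_score(pred, gt, label):
    """Dice overlap for one labeled structure (e.g., LV, myocardium, or RV),
    the standard metric for segmentation tasks such as CMRxMotion task 2."""
    p, g = (np.asarray(pred) == label), (np.asarray(gt) == label)
    denom = p.sum() + g.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, g).sum() / denom
```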