
AI-based identification of patients who benefit from revascularization: a multicenter study

Zhang, W., Miller, R. J., Patel, K., Shanbhag, A., Liang, J., Lemley, M., Ramirez, G., Builoff, V., Yi, J., Zhou, J., Kavanagh, P., Acampa, W., Bateman, T. M., Di Carli, M. F., Dorbala, S., Einstein, A. J., Fish, M. B., Hauser, M. T., Ruddy, T., Kaufmann, P. A., Miller, E. J., Sharir, T., Martins, M., Halcox, J., Chareonthaitawee, P., Dey, D., Berman, D., Slomka, P.

medRxiv preprint · Jun 12 2025
Background and Aims: Revascularization in stable coronary artery disease often relies on ischemia severity, but we introduce an AI-driven approach that uses clinical and imaging data to estimate individualized treatment effects and guide personalized decisions. Methods: Using a large, international registry from 13 centers, we developed an AI model to estimate individual treatment effects by simulating outcomes under alternative therapeutic strategies. The model was trained on an internal cohort constructed using 1:1 propensity score matching to emulate randomized controlled trials (RCTs), creating balanced patient pairs in which only the treatment strategy (early revascularization, defined as any procedure within 90 days of MPI, versus medical therapy) differed. This design allowed the model to estimate individualized treatment effects, forming the basis for counterfactual reasoning at the patient level. We then derived the AI-REVASC score, which quantifies the potential benefit, for each patient, of early revascularization. The score was validated in the held-out testing cohort using Cox regression. Results: Of 45,252 patients, 19,935 (44.1%) were female, median age 65 (IQR: 57-73). During a median follow-up of 3.6 years (IQR: 2.7-4.9), 4,323 (9.6%) experienced MI or death. The AI model identified a group (n=1,335, 5.9%) that benefits from early revascularization with a propensity-adjusted hazard ratio of 0.50 (95% CI: 0.25-1.00). Patients identified for early revascularization had higher prevalence of hypertension, diabetes, dyslipidemia, and lower LVEF. Conclusions: This study pioneers a scalable, data-driven approach that emulates randomized trials using retrospective data. The AI-REVASC score enables precision revascularization decisions where guidelines and RCTs fall short.
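
A minimal sketch of how a propensity-adjusted hazard ratio like the one reported here could be estimated on a matched testing cohort, assuming the cohort is available as a table; it uses the lifelines library, and the file name and column names (follow-up time, MI/death event, treatment flag, propensity score) are hypothetical stand-ins rather than the authors' pipeline.

```python
# Illustrative sketch (not the authors' code): propensity-adjusted Cox model
# for early revascularization versus medical therapy in a matched cohort.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("matched_test_cohort.csv")  # hypothetical 1:1 matched pairs

cph = CoxPHFitter()
cph.fit(
    df[["followup_years", "mi_or_death", "early_revasc", "propensity_score"]],
    duration_col="followup_years",   # time to MI/death or censoring
    event_col="mi_or_death",         # 1 = event, 0 = censored
)
hr = cph.hazard_ratios_["early_revasc"]   # adjusted hazard ratio for treatment
print(f"Propensity-adjusted HR for early revascularization: {hr:.2f}")
cph.print_summary()                        # includes 95% confidence intervals
```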

Automated Segmentation of Thoracic Aortic Lumen and Vessel Wall on 3D Bright- and Black-Blood MRI using nnU-Net.

Cesario M, Littlewood SJ, Nadel J, Fletcher TJ, Fotaki A, Castillo-Passi C, Hajhosseiny R, Pouliopoulos J, Jabbour A, Olivero R, Rodríguez-Palomares J, Kooi ME, Prieto C, Botnar RM

PubMed paper · Jun 11 2025
Magnetic resonance angiography (MRA) is an important tool for aortic assessment in several cardiovascular diseases. Assessment of MRA images relies on manual segmentation, a time-intensive process that is subject to operator variability. We aimed to optimize and validate two deep-learning models for automatic segmentation of the aortic lumen and vessel wall in high-resolution ECG-triggered free-breathing respiratory motion-corrected 3D bright- and black-blood MRA images. Manual segmentation, serving as the ground truth, was performed on 25 bright-blood and 15 black-blood 3D MRA image sets acquired with the iT2PrepIR-BOOST sequence (1.5T) in thoracic aortopathy patients. Training was performed with nnU-Net for bright-blood (lumen) and black-blood image sets (lumen and vessel wall), using a 70:20:10 training:validation:testing split. Inference was run on datasets (single vendor) from different centres (UK, Spain, and Australia), sequences (iT2PrepIR-BOOST, T2-prepared CMRA, and TWIST MRA), acquired resolutions (from 0.9 mm³ to 3 mm³), and field strengths (0.55T, 1.5T, and 3T). Performance metrics comprised the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU). Postprocessing (3D Slicer) included centreline extraction, diameter measurement, and curved planar reformatting (CPR). The optimal configuration was the 3D U-Net. Bright-blood segmentation at 1.5T on iT2PrepIR-BOOST datasets (1.3 and 1.8 mm³) and 3D CMRA datasets (0.9 mm³) resulted in DSC ≥ 0.96 and IoU ≥ 0.92. For bright-blood segmentation on 3D CMRA at 0.55T, nnU-Net achieved DSC and IoU scores of 0.93 and 0.88 at 1.5 mm³, and 0.68 and 0.52 at 3.0 mm³, respectively. DSC and IoU scores of 0.89 and 0.82 were obtained for CMRA image sets (1 mm³) at 1.5T (Barcelona dataset). DSC and IoU scores of the BRnnUNet model were 0.90 and 0.82, respectively, for the contrast-enhanced dataset (TWIST MRA). Lumen segmentation on black-blood 1.5T iT2PrepIR-BOOST image sets achieved DSC ≥ 0.95 and IoU ≥ 0.90, and vessel wall segmentation resulted in DSC ≥ 0.80 and IoU ≥ 0.67. Automated centreline tracking, diameter measurement, and CPR were successfully implemented in all subjects. Automated aortic lumen and wall segmentation on 3D bright- and black-blood image sets demonstrated excellent agreement with ground truth. This technique enables fast and comprehensive assessment of aortic morphology with great potential for future clinical application in various cardiovascular diseases.
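
The two overlap metrics reported throughout this abstract (DSC and IoU) can be computed directly from binary masks; the short sketch below is a generic implementation with illustrative array names, not the study's evaluation code.

```python
# Generic DSC and IoU between a predicted and a ground-truth binary segmentation.
import numpy as np

def dice_and_iou(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dsc = 2.0 * intersection / (pred.sum() + gt.sum() + eps)
    iou = intersection / (union + eps)
    return dsc, iou

# Example usage on 3D volumes (hypothetical variable names):
# dsc, iou = dice_and_iou(lumen_pred, lumen_gt)
```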

DCD: A Semantic Segmentation Model for Fetal Ultrasound Four-Chamber View

Donglian Li, Hui Guo, Minglang Chen, Huizhen Chen, Jialing Chen, Bocheng Liang, Pengchen Liang, Ying Tan

arXiv preprint · Jun 10 2025
Accurate segmentation of anatomical structures in the apical four-chamber (A4C) view of fetal echocardiography is essential for early diagnosis and prenatal evaluation of congenital heart disease (CHD). However, precise segmentation remains challenging due to ultrasound artifacts, speckle noise, anatomical variability, and boundary ambiguity across different gestational stages. To reduce the workload of sonographers and enhance segmentation accuracy, we propose DCD, an advanced deep learning-based model for automatic segmentation of key anatomical structures in the fetal A4C view. Our model incorporates a Dense Atrous Spatial Pyramid Pooling (Dense ASPP) module, enabling superior multi-scale feature extraction, and a Convolutional Block Attention Module (CBAM) to enhance adaptive feature representation. By effectively capturing both local and global contextual information, DCD achieves precise and robust segmentation, contributing to improved prenatal cardiac assessment.
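
The abstract names two standard building blocks, Dense ASPP and CBAM. As an illustration of the attention mechanism, the following is a generic PyTorch re-implementation of a CBAM-style block (channel attention followed by spatial attention); it is a sketch of the published CBAM design, not the DCD code, and a Dense ASPP head would be attached separately.

```python
# Generic CBAM-style block: channel attention, then spatial attention.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels: int, reduction: int = 16, spatial_kernel: int = 7):
        super().__init__()
        # Shared MLP applied to global average- and max-pooled channel descriptors
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1, bias=False),
        )
        # 7x7 convolution over channel-wise average and max maps
        self.spatial = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2, bias=False)

    def forward(self, x):
        avg = torch.mean(x, dim=(2, 3), keepdim=True)
        mx = torch.amax(x, dim=(2, 3), keepdim=True)
        x = x * torch.sigmoid(self.mlp(avg) + self.mlp(mx))     # channel attention
        avg_map = torch.mean(x, dim=1, keepdim=True)
        max_map = torch.amax(x, dim=1, keepdim=True)
        attn = torch.sigmoid(self.spatial(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                         # spatial attention
```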

SSS: Semi-Supervised SAM-2 with Efficient Prompting for Medical Imaging Segmentation

Hongjie Zhu, Xiwei Liu, Rundong Xue, Zeyu Zhang, Yong Xu, Daji Ergu, Ying Cai, Yang Zhao

arXiv preprint · Jun 10 2025
In the era of information explosion, efficiently leveraging large-scale unlabeled data while minimizing the reliance on high-quality pixel-level annotations remains a critical challenge in the field of medical imaging. Semi-supervised learning (SSL) enhances the utilization of unlabeled data by facilitating knowledge transfer, significantly improving the performance of fully supervised models and emerging as a highly promising research direction in medical image analysis. Inspired by the ability of Vision Foundation Models (e.g., SAM-2) to provide rich prior knowledge, we propose SSS (Semi-Supervised SAM-2), a novel approach that leverages SAM-2's robust feature extraction capabilities to uncover latent knowledge in unlabeled medical images, thus effectively enhancing feature support for fully supervised medical image segmentation. Specifically, building upon the single-stream "weak-to-strong" consistency regularization framework, this paper introduces a Discriminative Feature Enhancement (DFE) mechanism to further explore the feature discrepancies introduced by various data augmentation strategies across multiple views. By leveraging feature similarity and dissimilarity across multi-scale augmentation techniques, the method reconstructs and models the features, thereby effectively optimizing the salient regions. Furthermore, a prompt generator is developed that integrates Physical Constraints with a Sliding Window (PCSW) mechanism to generate input prompts for unlabeled data, fulfilling SAM-2's requirement for additional prompts. Extensive experiments demonstrate the superiority of the proposed method for semi-supervised medical image segmentation on two multi-label datasets, i.e., ACDC and BHSD. Notably, SSS achieves an average Dice score of 53.15 on BHSD, surpassing the previous state-of-the-art method by +3.65 Dice. Code will be available at https://github.com/AIGeeksGroup/SSS.
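
To make the "weak-to-strong" consistency idea concrete, here is a minimal sketch of that regularization term on unlabeled images: a confident pseudo-label from a weakly augmented view supervises the prediction on a strongly augmented view. The model, augmentation functions, and confidence threshold are placeholders; this is the general framework the paper builds on, not the SSS implementation or its DFE/PCSW modules.

```python
# Sketch of single-stream weak-to-strong consistency on an unlabeled batch.
import torch
import torch.nn.functional as F

def weak_to_strong_loss(model, unlabeled, weak_aug, strong_aug, conf_threshold=0.95):
    with torch.no_grad():
        weak_logits = model(weak_aug(unlabeled))
        probs = torch.softmax(weak_logits, dim=1)
        conf, pseudo_labels = probs.max(dim=1)          # per-pixel pseudo-labels
        mask = (conf >= conf_threshold).float()         # keep only confident pixels

    strong_logits = model(strong_aug(unlabeled))
    loss = F.cross_entropy(strong_logits, pseudo_labels, reduction="none")
    return (loss * mask).mean()
```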

Snap-and-tune: combining deep learning and test-time optimization for high-fidelity cardiovascular volumetric meshing

Daniel H. Pak, Shubh Thaker, Kyle Baylous, Xiaoran Zhang, Danny Bluestein, James S. Duncan

arXiv preprint · Jun 9 2025
High-quality volumetric meshing from medical images is a key bottleneck for physics-based simulations in personalized medicine. For volumetric meshing of complex medical structures, recent studies have often utilized deep learning (DL)-based template deformation approaches to enable fast test-time generation with high spatial accuracy. However, these approaches still exhibit limitations, such as limited flexibility at high-curvature areas and unrealistic inter-part distances. In this study, we introduce a simple yet effective snap-and-tune strategy that sequentially applies DL and test-time optimization, which combines fast initial shape fitting with more detailed sample-specific mesh corrections. Our method provides significant improvements in both spatial accuracy and mesh quality, while being fully automated and requiring no additional training labels. Finally, we demonstrate the versatility and usefulness of our newly generated meshes via solid mechanics simulations in two different software platforms. Our code is available at https://github.com/danpak94/Deep-Cardiac-Volumetric-Mesh.
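
A conceptual sketch of the "tune" half of snap-and-tune: after the DL template deformation produces an initial mesh, per-vertex offsets are optimized at test time against a sample-specific objective. The loss terms (surface_distance, laplacian_smoothing) and hyperparameters are hypothetical placeholders illustrating the idea, not the authors' objective or code.

```python
# Test-time optimization of vertex offsets on top of a DL-predicted mesh.
import torch

def test_time_refine(vertices, surface_distance, laplacian_smoothing,
                     steps=200, lr=1e-3, smooth_weight=0.1):
    offsets = torch.zeros_like(vertices, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        v = vertices + offsets
        # Fit the image-derived surface while keeping the mesh smooth.
        loss = surface_distance(v) + smooth_weight * laplacian_smoothing(v)
        loss.backward()
        opt.step()
    return (vertices + offsets).detach()
```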

Integration of artificial intelligence into cardiac ultrasonography practice.

Shaulian SY, Gala D, Makaryus AN

PubMed paper · Jun 9 2025
Over the last several decades, echocardiography has seen numerous technological advancements, one of the most significant being the integration of artificial intelligence (AI). AI algorithms assist novice operators in acquiring diagnostic-quality images and automate complex analyses. This review explores the integration of AI into various echocardiographic modalities, including transthoracic, transesophageal, intracardiac, and point-of-care ultrasound. It examines how AI enhances image acquisition, streamlines analysis, and improves diagnostic performance across routine, critical care, and complex cardiac imaging. To conduct this review, PubMed was searched using targeted keywords aligned with each section of the paper, focusing primarily on peer-reviewed articles published from 2020 onward. Earlier studies were included when foundational or frequently cited. The findings were organized thematically to highlight clinical relevance and practical applications. Challenges persist in clinical application, including algorithmic bias, ethical concerns, and the need for clinician training and AI oversight. Despite these challenges, AI's potential to revolutionize cardiovascular care through precision and accessibility remains unparalleled, with benefits likely to far outweigh obstacles if appropriately applied and implemented in cardiac ultrasonography.

MHASegNet: A multi-scale hybrid aggregation network for segmenting coronary artery from CCTA images.

Li S, Wu Y, Jiang B, Liu L, Zhang T, Sun Y, Hou J, Monkam P, Qian W, Qi S

PubMed paper · Jun 9 2025
Segmentation of coronary arteries in Coronary Computed Tomography Angiography (CCTA) images is crucial for diagnosing coronary artery disease (CAD), but remains challenging due to small artery size, uneven contrast distribution, and issues like over-segmentation or omission. The aim of this study is to improve coronary artery segmentation in CCTA images using both conventional and deep learning techniques. We propose MHASegNet, a lightweight network for coronary artery segmentation, combined with a tailored refinement method. MHASegNet employs multi-scale hybrid attention to capture global and local features, and integrates a 3D context anchor attention module to focus on key coronary artery structures while suppressing background noise. An iterative, region-growth-based refinement addresses crown breaks and reduces false alarms. We evaluated the method on an in-house dataset of 90 subjects and two public datasets with 1060 subjects. MHASegNet, coupled with tailored refinement, outperforms state-of-the-art algorithms, achieving a Dice Similarity Coefficient (DSC) of 0.867 on the in-house dataset, 0.875 on the ASOCA dataset, and 0.827 on the ImageCAS dataset. The tailored refinement significantly reduces false positives and resolves most discontinuities, even for other networks. MHASegNet and the tailored refinement may aid in diagnosing and quantifying CAD following further validation.
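
The iterative, region-growth-based refinement described here can be pictured as growing from high-confidence coronary voxels into a more permissive mask, reconnecting breaks while discarding isolated false positives. The sketch below illustrates that general idea on a probability map with scipy; thresholds and minimum-size criteria are illustrative assumptions, not the MHASegNet refinement.

```python
# Region-growth style refinement of a coronary probability map (illustrative).
import numpy as np
from scipy import ndimage

def refine_coronary_mask(prob, seed_thr=0.8, grow_thr=0.3, min_voxels=50):
    seeds = prob >= seed_thr            # confident coronary voxels
    candidate = prob >= grow_thr        # permissive mask to grow into
    labels, n = ndimage.label(candidate)
    keep = np.zeros_like(seeds)
    for lbl in range(1, n + 1):
        component = labels == lbl
        # Keep a candidate component only if it is large enough and touches a seed.
        if component.sum() >= min_voxels and np.any(component & seeds):
            keep |= component
    return keep
```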

Deep learning-based post-hoc noise reduction improves quarter-radiation-dose coronary CT angiography.

Morikawa T, Nishii T, Tanabe Y, Yoshida K, Toshimori W, Fukuyama N, Toritani H, Suekuni H, Fukuda T, Kido T

PubMed paper · Jun 9 2025
To evaluate the impact of deep learning-based post-hoc noise reduction (DLNR) on image quality, coronary artery disease reporting and data system (CAD-RADS) assessment, and diagnostic performance in quarter-dose versus full-dose coronary CT angiography (CCTA) on external datasets. We retrospectively reviewed 221 patients who underwent retrospective electrocardiogram-gated CCTA in 2022-2023. Using dose modulation, either mid-diastole or end-systole was scanned at full dose depending on heart rates, and the other phase at quarter dose. Only patients with motion-free coronaries in both phases were included. Images were acquired using iterative reconstruction, and a residual dense network trained on external datasets denoised the quarter-dose images. Image quality was assessed by comparing noise levels using Tukey's test. Two radiologists independently assessed CAD-RADS, with agreement to full-dose images evaluated by Cohen's kappa. Diagnostic performance for significant stenosis referencing full-dose images was compared between quarter-dose and denoised images by the area under the receiver operating characteristic curve (AUC) using the DeLong test. Among 40 cases (age, 71 ± 7 years; 24 males), DLNR reduced noise from 37 to 18 HU (P < 0.001) in quarter-dose CCTA (full-dose images: 22 HU), and improved CAD-RADS agreement from moderate (0.60 [95 % CI: 0.41-0.78]) to excellent (0.82 [95 % CI: 0.66-0.94]). Denoised images demonstrated a superior AUC (0.97 [95 % CI: 0.95-1.00]) for diagnosing significant stenosis compared with original quarter-dose images (0.93 [95 % CI: 0.89-0.98]; P = 0.032). DLNR for quarter-dose CCTA significantly improved image quality, CAD-RADS agreement, and diagnostic performance for detecting significant stenosis referencing full-dose images.
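 
Post-hoc denoising of this kind is typically applied slice by slice, with image noise then measured as the HU standard deviation in a homogeneous region of interest. The sketch below shows that workflow under the assumption that the network predicts the noise component to subtract (residual learning); if the model outputs the denoised image directly, the subtraction is dropped. Model, ROI, and variable names are placeholders, not the study's pipeline.

```python
# Apply a residual denoiser to a quarter-dose slice and measure noise before/after.
import numpy as np
import torch

def denoise_and_measure(model, quarter_dose_hu, roi_slices):
    with torch.no_grad():
        x = torch.from_numpy(quarter_dose_hu).float()[None, None]  # (1, 1, H, W)
        residual = model(x)                        # assumed: predicted noise component
        denoised = (x - residual).squeeze().numpy()
    noise_before = float(np.std(quarter_dose_hu[roi_slices]))      # HU SD in a uniform ROI
    noise_after = float(np.std(denoised[roi_slices]))
    return denoised, noise_before, noise_after
```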

Deep learning-based prospective slice tracking for continuous catheter visualization during MRI-guided cardiac catheterization.

Neofytou AP, Kowalik G, Vidya Shankar R, Kunze K, Moon T, Mellor N, Neji R, Razavi R, Pushparajah K, Roujol S

PubMed paper · Jun 8 2025
This proof-of-concept study introduces a novel, deep learning-based, parameter-free, automatic slice-tracking technique for continuous catheter tracking and visualization during MR-guided cardiac catheterization. The proposed sequence includes Calibration and Runtime modes. Initially, Calibration mode identifies the catheter tip's three-dimensional coordinates using a fixed stack of contiguous slices. A U-Net architecture with a ResNet-34 encoder is used to identify the catheter tip location. Once identified, the sequence then switches to Runtime mode, dynamically acquiring three contiguous slices automatically centered on the catheter tip. The catheter location is estimated from each Runtime stack using the same network and fed back to the sequence, enabling prospective slice tracking to keep the catheter in the central slice. If the catheter remains unidentified over several dynamics, the sequence reverts to Calibration mode. This artificial intelligence (AI)-based approach was evaluated prospectively in a three-dimensional-printed heart phantom and 3 patients undergoing MR-guided cardiac catheterization. This technique was also compared retrospectively in 2 patients with a previous non-AI automatic tracking method relying on operator-defined parameters. In the phantom study, the tracking framework achieved 100% accuracy/sensitivity/specificity in both modes. Across all patients, the average accuracy/sensitivity/specificity were 100 ± 0/100 ± 0/100 ± 0% (Calibration) and 98.4 ± 0.8/94.1 ± 2.9/100.0 ± 0.0% (Runtime). The parametric, non-AI technique and the proposed parameter-free AI-based framework yielded identical accuracy (100%) in Calibration mode and similar accuracy range in Runtime mode (Patients 1 and 2: 100%-97%, and 100%-98%, respectively). An AI-based prospective slice-tracking framework was developed for real-time, parameter-free, operator-independent, automatic tracking of gadolinium-filled balloon catheters. Its feasibility was successfully demonstrated in patients undergoing MRI-guided cardiac catheterization.
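
The Calibration/Runtime logic described above amounts to a small state machine with feedback to the sequence. The following high-level sketch captures that control flow; locate_tip() stands in for the U-Net (ResNet-34 encoder) localizer, and the scanner interface is a hypothetical abstraction of the sequence control, not the authors' framework.

```python
# Two-mode tracking loop: calibrate, then prospectively re-center runtime slices.
def tracking_loop(scanner, locate_tip, max_misses=3):
    mode, misses, center = "calibration", 0, None
    while scanner.running():
        if mode == "calibration":
            stack = scanner.acquire_calibration_stack()    # fixed stack of contiguous slices
            tip = locate_tip(stack)                        # (x, y, z) or None
            if tip is not None:
                center, mode, misses = tip, "runtime", 0
        else:
            stack = scanner.acquire_runtime_stack(center)  # 3 slices centered on last tip
            tip = locate_tip(stack)
            if tip is not None:
                center, misses = tip, 0                    # prospective feedback to sequence
            else:
                misses += 1
                if misses >= max_misses:
                    mode = "calibration"                   # revert when the catheter is lost
```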

Diffusion-based image translation model from low-dose chest CT to calcium scoring CT with random point sampling.

Jung JH, Lee JE, Lee HS, Yang DH, Lee JG

PubMed paper · Jun 7 2025
Coronary artery calcium (CAC) scoring is an important method for cardiovascular risk assessment. While artificial intelligence (AI) has been applied to automate CAC scoring in calcium scoring computed tomography (CSCT), its application to low-dose computed tomography (LDCT) scans, typically used for lung cancer screening, remains challenging due to the lower image quality and higher noise levels of LDCT. This study presents a diffusion model-based method for converting LDCT to CSCT images with the aim of improving CAC scoring accuracy from LDCT scans. A conditional diffusion model was developed to generate CSCT images from LDCT by modifying the denoising diffusion implicit model (DDIM) sampling process. Two main modifications were introduced: (1) random pointing, a novel sampling technique that enhances the trajectory guidance methodology of DDIM using stochastic Gaussian noise to optimize domain adaptation, and (2) intermediate sampling, an advanced methodology that strategically injects minimal noise into LDCT images prior to sampling to maximize structural preservation. The model was trained on LDCT and CSCT images obtained from the same patients but acquired separately at different time points and patient positions, and validated on 37 test cases. The proposed method showed superior performance compared to widely used image-to-image models (CycleGAN, CUT, DCLGAN, NEGCUT) across several evaluation metrics, including peak signal-to-noise ratio (39.93 ± 0.44), Local Normalized Cross-Correlation (0.97 ± 0.01), structural similarity index (0.97 ± 0.01), and Dice similarity coefficient (0.73 ± 0.10). The modifications to the sampling process reduced the number of iterations from 1000 to 10 while maintaining image quality. Volumetric analysis indicated a stronger correlation between the calcium volumes in the enhanced CSCT images and expert-verified annotations, as compared to the original LDCT images. The proposed method effectively transforms LDCT images to CSCT images while preserving anatomical structures and calcium deposits. The reduction in sampling time and the improved preservation of calcium structures suggest that the method could be applicable for clinical use in cardiovascular risk assessment.
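
For context on the sampling process being modified, the sketch below shows a single textbook DDIM update with a stochastic noise term (eta > 0), the generic mechanism that the "random pointing" modification builds on; alpha_bar is the cumulative noise schedule and eps_model the trained noise predictor. This is standard DDIM, not the paper's code or its intermediate-sampling scheme.

```python
# One DDIM denoising step with a stochastic (eta > 0) noise component.
import torch

def ddim_step(x_t, t, t_prev, eps_model, alpha_bar, eta=0.1):
    a_t, a_prev = alpha_bar[t], alpha_bar[t_prev]
    eps = eps_model(x_t, t)
    x0_pred = (x_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()         # predicted clean image
    sigma = eta * ((1 - a_prev) / (1 - a_t)).sqrt() * (1 - a_t / a_prev).sqrt()
    noise = sigma * torch.randn_like(x_t)                          # stochastic component
    return a_prev.sqrt() * x0_pred + (1 - a_prev - sigma**2).sqrt() * eps + noise
```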