
Predicting Cardiopulmonary Exercise Testing Performance in Patients Undergoing Transthoracic Echocardiography - An AI-Based, Multimodal Model

Alishetti, S., Pan, W., Beecy, A. N., Liu, Z., Gong, A., Huang, Z., Clerkin, K. J., Goldsmith, R. L., Majure, D. T., Kelsey, C., vanMaanan, D., Ruhl, J., Tesfuzigta, N., Lancet, E., Kumaraiah, D., Sayer, G., Estrin, D., Weinberger, K., Kuleshov, V., Wang, F., Uriel, N.

medRxiv preprint · Jul 6 2025
Background and Aims: Transthoracic echocardiography (TTE) is a widely available tool for diagnosing and managing heart failure but has limited predictive value for survival. Cardiopulmonary exercise test (CPET) performance strongly correlates with survival in heart failure patients but is less accessible. We sought to develop an artificial intelligence (AI) algorithm using TTE and electronic medical records to predict CPET peak oxygen consumption (peak VO2) ≤ 14 mL/kg/min. Methods: An AI model was trained to predict peak VO2 ≤ 14 mL/kg/min from TTE images, structured TTE reports, demographics, medications, labs, and vitals. The training set included patients with a TTE within 6 months of a CPET. Performance was retrospectively tested in a held-out group from the development cohort and an external validation cohort. Results: 1,127 CPET studies paired with concomitant TTE were identified. The best performance was achieved by using all components (TTE images and all structured clinical data). The model performed well at predicting a peak VO2 ≤ 14 mL/kg/min, with an AUROC of 0.84 (development cohort) and 0.80 (external validation cohort). It performed consistently well using higher (≤ 18 mL/kg/min) and lower (≤ 12 mL/kg/min) cut-offs. Conclusions: This multimodal AI model effectively categorized patients into low and high predicted peak VO2 risk groups, demonstrating the potential to identify previously unrecognized patients in need of advanced heart failure therapies where CPET is not available.
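
The abstract does not disclose the model architecture, so the following PyTorch sketch is only a rough illustration of the kind of late-fusion design it describes: a TTE image encoder and a structured clinical-data branch feeding a single binary classifier for peak VO2 ≤ 14 mL/kg/min. All module names, dimensions, and the toy encoder are assumptions.

```python
import torch
import torch.nn as nn

class PeakVO2Classifier(nn.Module):
    """Hypothetical late-fusion model: TTE image embedding + structured clinical data
    -> probability that peak VO2 <= 14 mL/kg/min. Dimensions are illustrative only."""

    def __init__(self, img_embed_dim=512, clinical_dim=64, hidden_dim=128):
        super().__init__()
        # Placeholder image encoder; the paper's actual TTE encoder is not described.
        self.image_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(16, img_embed_dim),
        )
        # Structured-data branch: demographics, labs, vitals, medications, TTE report fields.
        self.clinical_encoder = nn.Sequential(
            nn.Linear(clinical_dim, hidden_dim),
            nn.ReLU(),
        )
        # Fusion head producing one logit for the binary outcome.
        self.head = nn.Sequential(
            nn.Linear(img_embed_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, tte_image, clinical_features):
        img_feat = self.image_encoder(tte_image)
        clin_feat = self.clinical_encoder(clinical_features)
        return self.head(torch.cat([img_feat, clin_feat], dim=1))

# Toy usage: one grayscale TTE frame plus a 64-dimensional clinical feature vector.
model = PeakVO2Classifier()
logit = model(torch.randn(1, 1, 224, 224), torch.randn(1, 64))
prob_low_vo2 = torch.sigmoid(logit)  # P(peak VO2 <= 14 mL/kg/min)
```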

DHR-Net: Dynamic Harmonized registration network for multimodal medical images.

Yang X, Li D, Chen S, Deng L, Wang J, Huang S

PubMed · Jul 5 2025
Deep learning has driven remarkable advances in medical image registration, and deep neural network-based non-rigid deformation field generation methods achieve high accuracy in single-modality scenarios; multi-modal medical image registration, however, still faces critical challenges. To address the insufficient anatomical consistency and unstable deformation field optimization of existing methods in cross-modal registration tasks, this paper proposes an end-to-end medical image registration method based on a Dynamic Harmonized Registration framework (DHR-Net). DHR-Net employs a cascaded two-stage architecture comprising a translation network and a registration network that operate in sequential processing phases. Furthermore, we propose a loss function based on the Noise Contrastive Estimation framework, which enhances anatomical consistency in cross-modal translation by maximizing mutual information between input and transformed image patches. This loss function incorporates a dynamic temperature adjustment mechanism that progressively tightens feature contrast constraints during training to improve high-frequency detail preservation, thereby better constraining the topological structure of target images. Experiments conducted on the M&M Heart Dataset demonstrate that DHR-Net outperforms existing methods in registration accuracy, deformation field smoothness, and cross-modal robustness. The framework significantly enhances the registration quality of cardiac images while preserving anatomical structures exceptionally well, showing promising potential for clinical applications.
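
The paper's loss is not reproduced in the abstract; the sketch below shows one plausible reading of a patch-wise Noise Contrastive Estimation (InfoNCE) term combined with a temperature that is annealed over training. The schedule, dimensions, and function names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def patch_nce_loss(src_feats, tgt_feats, temperature):
    """InfoNCE over corresponding patch embeddings.

    src_feats, tgt_feats: (N, D) embeddings of N matched patches from the input
    image and its translated counterpart. Positive pairs sit on the diagonal of
    the similarity matrix; all other patches act as negatives.
    """
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = src @ tgt.t() / temperature          # (N, N) similarity matrix
    labels = torch.arange(src.size(0), device=src.device)
    return F.cross_entropy(logits, labels)

def dynamic_temperature(step, total_steps, t_start=0.3, t_end=0.05):
    """Illustrative linear schedule: sharpen the contrast constraint as training
    progresses, which is one plausible reading of 'dynamic temperature adjustment'."""
    frac = min(step / max(total_steps, 1), 1.0)
    return t_start + frac * (t_end - t_start)

# Toy usage with 128 patches of 256-dim features at training step 500 of 10,000.
loss = patch_nce_loss(torch.randn(128, 256), torch.randn(128, 256),
                      dynamic_temperature(500, 10_000))
```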

Knowledge, attitudes, and practices of cardiovascular health care personnel regarding coronary CTA and AI-assisted diagnosis: a cross-sectional study.

Jiang S, Ma L, Pan K, Zhang H

PubMed · Jul 4 2025
Artificial intelligence (AI) holds significant promise for medical applications, particularly in coronary computed tomography angiography (CTA). We assessed the knowledge, attitudes, and practices (KAP) of cardiovascular health care personnel regarding coronary CTA and AI-assisted diagnosis. We conducted a cross-sectional survey from 1 July to 1 August 2024 at Tsinghua University Hospital, Beijing, China. Healthcare professionals, including both physicians and nurses, aged ≥18 years were eligible to participate. We used a structured questionnaire to collect demographic information and KAP scores. We analysed the data using correlation and regression methods, along with structural equation modelling. Among 496 participants, 58.5% were female, 52.6% held a bachelor's degree, and 40.7% worked in radiology. Mean KAP scores were 13.87 (standard deviation (SD) = 4.96, possible range = 0-20) for knowledge, 28.25 (SD = 4.35, possible range = 8-40) for attitude, and 31.67 (SD = 8.23, possible range = 10-50) for practice. Both knowledge (r = 0.358; P < 0.001) and attitude (r = 0.489; P < 0.001) were positively correlated with practice. Multivariate logistic regression indicated that educational level, department affiliation, and job satisfaction were significant predictors of knowledge. Attitude was influenced by marital status, department, and years of experience, while practice was shaped by knowledge, attitude, departmental factors, and job satisfaction. Structural equation modelling showed that knowledge was directly affected by gender (β = -0.121; P = 0.009), workplace (β = -0.133; P = 0.004), department (β = -0.197; P < 0.001), employment status (β = -0.166; P < 0.001), and night shift frequency (β = 0.163; P < 0.001). Attitude was directly influenced by marital status (β = 0.124; P = 0.006) and job satisfaction (β = -0.528; P < 0.001). Practice was directly affected by knowledge (β = 0.389; P < 0.001), attitude (β = 0.533; P < 0.001), and gender (β = -0.092; P = 0.010). Additionally, gender (β = -0.051; P = 0.010) and marital status (β = 0.066; P = 0.007) had indirect effects on practice. Cardiovascular health care personnel exhibited suboptimal knowledge, positive attitudes, and relatively inactive practices regarding coronary CTA and AI-assisted diagnosis. Targeted educational efforts are needed to enhance knowledge and support the integration of AI into clinical workflows.
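
For readers who want to run a similar analysis on their own survey data, a minimal sketch of the correlation and multivariable logistic-regression steps is shown below, using synthetic KAP scores and statsmodels; the variable names and score ranges mirror the abstract, but the data and model specification are illustrative only.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the survey: KAP scores plus a binary "adequate knowledge" flag.
rng = np.random.default_rng(0)
n = 496
df = pd.DataFrame({
    "knowledge": rng.integers(0, 21, n),        # possible range 0-20
    "attitude": rng.integers(8, 41, n),         # possible range 8-40
    "practice": rng.integers(10, 51, n),        # possible range 10-50
    "job_satisfaction": rng.integers(1, 6, n),  # illustrative 1-5 rating
})
df["adequate_knowledge"] = (df["knowledge"] >= 14).astype(int)

# Pearson correlations of knowledge and attitude with practice.
print(df[["knowledge", "attitude"]].corrwith(df["practice"]))

# Multivariable logistic regression for predictors of adequate knowledge.
X = sm.add_constant(df[["attitude", "practice", "job_satisfaction"]])
model = sm.Logit(df["adequate_knowledge"], X).fit(disp=False)
print(model.summary())
```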

Fine-tuning of language models for automated structuring of medical exam reports to improve patient screening and analysis.

Elvas LB, Santos R, Ferreira JC

PubMed · Jul 4 2025
The analysis of medical imaging reports is labour-intensive but crucial for accurate diagnosis and effective patient screening. Often presented as unstructured text, these reports require systematic organisation for efficient interpretation. This study applies Natural Language Processing (NLP) techniques tailored for European Portuguese to automate the analysis of cardiology reports, streamlining patient screening. Using a methodology involving tokenization, part-of-speech tagging and manual annotation, the MediAlbertina PT-PT language model was fine-tuned, achieving 96.13% accuracy in entity recognition. The system enables rapid identification of conditions such as aortic stenosis through an interactive interface, substantially reducing the time and effort required for manual review. It also facilitates patient monitoring and disease quantification, optimising healthcare resource allocation. This research highlights the potential of NLP tools in Portuguese healthcare contexts, demonstrating their applicability to medical report analysis and their broader relevance in improving efficiency and decision-making in diverse clinical environments.
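
As a rough illustration of this kind of token-classification fine-tuning, the Hugging Face sketch below runs a single training step on a toy annotated Portuguese snippet. The checkpoint id, label scheme, and example sentence are placeholders, not the study's actual MediAlbertina PT-PT setup.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical checkpoint id and entity scheme; substitute the actual MediAlbertina
# PT-PT model and the label set used for the cardiology reports.
CHECKPOINT = "your-org/medialbertina-pt-pt"
LABELS = ["O", "B-CONDITION", "I-CONDITION"]

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForTokenClassification.from_pretrained(
    CHECKPOINT,
    num_labels=len(LABELS),
    id2label=dict(enumerate(LABELS)),
    label2id={l: i for i, l in enumerate(LABELS)},
)

# One toy training example: a Portuguese report snippet with "estenose aórtica"
# (aortic stenosis) annotated as a CONDITION span at the word level.
words = ["Doente", "com", "estenose", "aórtica", "grave", "."]
word_labels = [0, 0, 1, 2, 0, 0]          # O O B-CONDITION I-CONDITION O O

enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# Align word-level labels to sub-word tokens; special tokens get -100 (ignored by the loss).
aligned = [-100 if wid is None else word_labels[wid] for wid in enc.word_ids(0)]
labels = torch.tensor([aligned])

# A single fine-tuning step (in practice this runs over the full annotated corpus).
out = model(**enc, labels=labels)
out.loss.backward()
```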

Novel CAC Dispersion and Density Score to Predict Myocardial Infarction and Cardiovascular Mortality.

Huangfu G, Ihdayhid AR, Kwok S, Konstantopoulos J, Niu K, Lu J, Smallbone H, Figtree GA, Chow CK, Dembo L, Adler B, Hamilton-Craig C, Grieve SM, Chan MTV, Butler C, Tandon V, Nagele P, Woodard PK, Mrkobrada M, Szczeklik W, Aziz YFA, Biccard B, Devereaux PJ, Sheth T, Dwivedi G, Chow BJW

PubMed · Jul 4 2025
Coronary artery calcification (CAC) provides robust prediction for major adverse cardiovascular events (MACE), but current techniques disregard plaque distribution and the protective effects of high CAC density. We investigated whether a novel CAC-dispersion and density (CAC-DAD) score exhibits superior prognostic value compared with the Agatston score (AS) for MACE prediction. We conducted a multicenter, retrospective, cross-sectional study of 961 patients (median age, 67 years; 61% male) who underwent cardiac computed tomography for cardiovascular or perioperative risk assessment. Blinded analyzers applied deep learning algorithms to noncontrast scans to calculate the CAC-DAD score, which adjusts for the spatial distribution of CAC and assigns a protective weight factor for lesions with ≥1000 Hounsfield units. Associations were assessed using frailty regression. Over a median follow-up of 30 (30-460) days, 61 patients experienced MACE (nonfatal myocardial infarction or cardiovascular mortality). An elevated CAC-DAD score (≥2050 based on the optimal cutoff) captured more MACE than AS ≥400 (74% versus 57%; P=0.002). Univariable analysis revealed that an elevated CAC-DAD score, AS ≥400 and AS ≥100, age, diabetes, hypertension, and statin use predicted MACE. On multivariable analysis, only the CAC-DAD score (hazard ratio, 2.57 [95% CI, 1.43-4.61]; P=0.002), age, statins, and diabetes remained significant. The inclusion of the CAC-DAD score in a predictive model containing demographic factors and AS improved the C statistic from 0.61 to 0.66 (P=0.008). The fully automated CAC-DAD score improves MACE prediction compared with the AS. Patients with a high CAC-DAD score, including those with a low AS, may be at higher risk and warrant intensification of their preventative therapies.
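
The published CAC-DAD formula is not given in the abstract; the sketch below only illustrates the two ingredients described, a dispersion term over lesion centroids and a protective down-weighting of very dense (≥1000 HU) lesions, layered on an Agatston-style base score. Every weight and scaling choice here is an assumption.

```python
import numpy as np

def cac_dad_like_score(lesions, protective_weight=0.5):
    """Illustrative dispersion-and-density-adjusted calcium score.

    This is NOT the published CAC-DAD formula; it only sketches the two ideas
    the authors describe: rewarding spatially dispersed plaque and down-weighting
    very dense (>=1000 HU) lesions.

    lesions: list of dicts with 'area_mm2', 'peak_hu', and 'centroid' (x, y, z) in mm.
    """
    if not lesions:
        return 0.0
    base = 0.0
    for l in lesions:
        hu = l["peak_hu"]
        # Agatston-style density factor for the lesion's peak attenuation.
        density_factor = 1 if hu < 200 else 2 if hu < 300 else 3 if hu < 400 else 4
        # Protective factor for very dense plaque (assumed value).
        weight = protective_weight if hu >= 1000 else 1.0
        base += l["area_mm2"] * density_factor * weight
    # Dispersion term: mean pairwise distance between lesion centroids (mm).
    centroids = np.array([l["centroid"] for l in lesions], dtype=float)
    if len(centroids) > 1:
        diffs = centroids[:, None, :] - centroids[None, :, :]
        dispersion = np.sqrt((diffs ** 2).sum(-1)).mean()
    else:
        dispersion = 0.0
    return base * (1.0 + dispersion / 100.0)   # arbitrary scaling for illustration

score = cac_dad_like_score([
    {"area_mm2": 12.0, "peak_hu": 450, "centroid": (10.0, 5.0, 20.0)},
    {"area_mm2": 8.0, "peak_hu": 1100, "centroid": (40.0, 12.0, 35.0)},
])
```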

TABNet: A Triplet Augmentation Self-Recovery Framework with Boundary-Aware Pseudo-Labels for Medical Image Segmentation

Peilin Zhang, Shaouxan Wua, Jun Feng, Zhuo Jin, Zhizezhang Gao, Jingkun Chen, Yaqiong Xing, Xiao Zhang

arXiv preprint · Jul 3 2025
Background and objective: Medical image segmentation is a core task in various clinical applications. However, acquiring large-scale, fully annotated medical image datasets is both time-consuming and costly. Scribble annotations, as a form of sparse labeling, provide an efficient and cost-effective alternative for medical image segmentation. Yet the sparsity of scribble annotations limits feature learning of the target region and provides insufficient boundary supervision, which poses significant challenges for training segmentation networks. Methods: We propose TABNet, a novel weakly supervised medical image segmentation framework consisting of two key components: the triplet augmentation self-recovery (TAS) module and the boundary-aware pseudo-label supervision (BAP) module. The TAS module enhances feature learning through three complementary augmentation strategies: intensity transformation improves the model's sensitivity to texture and contrast variations, cutout forces the network to capture local anatomical structures by masking key regions, and jigsaw augmentation strengthens the modeling of global anatomical layout by disrupting spatial continuity. By guiding the network to recover complete masks from diverse augmented inputs, TAS promotes a deeper semantic understanding of medical images under sparse supervision. The BAP module enhances pseudo-supervision accuracy and boundary modeling by fusing dual-branch predictions into a loss-weighted pseudo-label and introducing a boundary-aware loss for fine-grained contour refinement. Results: Experimental evaluations on two public datasets, ACDC and MSCMRseg, demonstrate that TABNet significantly outperforms state-of-the-art methods for scribble-based weakly supervised segmentation. Moreover, it achieves performance comparable to that of fully supervised methods.
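
A minimal sketch of the three augmentation strategies the TAS module combines (intensity transformation, cutout, and jigsaw shuffling) is shown below; the gamma range, cutout size, grid size, and recovery setup are assumptions rather than the authors' settings.

```python
import torch

def intensity_transform(img, gamma_range=(0.7, 1.4)):
    """Random gamma adjustment to perturb texture and contrast."""
    gamma = torch.empty(1).uniform_(*gamma_range).item()
    return img.clamp(min=0).pow(gamma)

def cutout(img, size=32):
    """Mask a random square region so the network must rely on local anatomy elsewhere."""
    _, h, w = img.shape
    y = torch.randint(0, h - size, (1,)).item()
    x = torch.randint(0, w - size, (1,)).item()
    out = img.clone()
    out[:, y:y + size, x:x + size] = 0
    return out

def jigsaw(img, grid=4):
    """Shuffle a grid of tiles to disrupt spatial continuity (global anatomical layout)."""
    c, h, w = img.shape
    th, tw = h // grid, w // grid
    tiles = [img[:, i*th:(i+1)*th, j*tw:(j+1)*tw] for i in range(grid) for j in range(grid)]
    perm = torch.randperm(len(tiles)).tolist()
    rows = [torch.cat([tiles[perm[i*grid + j]] for j in range(grid)], dim=2) for i in range(grid)]
    return torch.cat(rows, dim=1)

# The recovery target is the clean image (or full mask); the network is trained to
# reconstruct it from each of the three augmented views.
img = torch.rand(1, 128, 128)
views = [intensity_transform(img), cutout(img), jigsaw(img)]
```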

3D Heart Reconstruction from Sparse Pose-agnostic 2D Echocardiographic Slices

Zhurong Chen, Jinhua Chen, Wei Zhuo, Wufeng Xue, Dong Ni

arXiv preprint · Jul 3 2025
Echocardiography (echo) plays an indispensable role in the clinical practice of heart diseases. However, ultrasound imaging typically provides only two-dimensional (2D) cross-sectional images from a few specific views, making it challenging to interpret and inaccurate for estimating clinical parameters such as left ventricular (LV) volume. 3D ultrasound imaging provides an alternative for 3D quantification but is still limited by low spatial and temporal resolution and highly demanding manual delineation. To address these challenges, we propose an innovative framework for reconstructing personalized 3D heart anatomy from the 2D echo slices that are routinely acquired in clinical practice. Specifically, a novel 3D reconstruction pipeline is designed that alternately optimizes the 3D pose estimation of these 2D slices and their 3D integration using an implicit neural network, progressively transforming a prior 3D heart shape into a personalized 3D heart model. We validate the method on two datasets. When six planes are used, the reconstructed 3D heart yields a significant improvement in LV volume estimation over the bi-plane method (error: 1.98% vs. 20.24%). In addition, the reconstruction framework achieves an important breakthrough: it can estimate RV volume from 2D echo slices (with an error of 5.75%). This study provides a new way to analyze personalized 3D structure and function from cardiac ultrasound and holds great potential for clinical practice.
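
The alternating optimization could be sketched roughly as follows: one step fits a toy implicit occupancy network to a slice under its current pose, and the next refines the pose against the frozen shape. The networks, the simplified in-plane pose parameterization, and the synthetic slice are placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ImplicitHeart(nn.Module):
    """Toy implicit shape network: (x, y, z) -> occupancy logit."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))
    def forward(self, xyz):
        return self.net(xyz)

def slice_points(pose, uv):
    """Map in-plane 2D coordinates to 3D using a simplified pose
    (in-plane rotation angle plus 3D translation), for illustration only."""
    theta, tx, ty, tz = pose[0], pose[1], pose[2], pose[3]
    x = uv[:, 0] * torch.cos(theta) - uv[:, 1] * torch.sin(theta) + tx
    y = uv[:, 0] * torch.sin(theta) + uv[:, 1] * torch.cos(theta) + ty
    z = torch.zeros_like(x) + tz
    return torch.stack([x, y, z], dim=1)

# Toy data: one 2D slice's pixel coordinates and binary myocardium labels.
uv = torch.rand(256, 2) * 2 - 1
labels = (uv[:, 0] ** 2 + uv[:, 1] ** 2 < 0.5).float().unsqueeze(1)

shape_net = ImplicitHeart()
pose = torch.zeros(4, requires_grad=True)
opt_shape = torch.optim.Adam(shape_net.parameters(), lr=1e-3)
opt_pose = torch.optim.Adam([pose], lr=1e-2)
bce = nn.BCEWithLogitsLoss()

for it in range(200):
    # Step 1: fit the implicit shape to the slice under the current (frozen) pose.
    loss = bce(shape_net(slice_points(pose.detach(), uv)), labels)
    opt_shape.zero_grad()
    loss.backward()
    opt_shape.step()
    # Step 2: refine the slice pose against the current implicit shape.
    loss = bce(shape_net(slice_points(pose, uv)), labels)
    opt_pose.zero_grad()
    loss.backward()
    opt_pose.step()
```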

CineMyoPS: Segmenting Myocardial Pathologies from Cine Cardiac MR

Wangbin Ding, Lei Li, Junyi Qiu, Bogen Lin, Mingjing Yang, Liqin Huang, Lianming Wu, Sihan Wang, Xiahai Zhuang

arXiv preprint · Jul 3 2025
Myocardial infarction (MI) is a leading cause of death worldwide. Late gadolinium enhancement (LGE) and T2-weighted cardiac magnetic resonance (CMR) imaging can respectively identify scarring and edema areas, both of which are essential for MI risk stratification and prognosis assessment. Although combining complementary information from multi-sequence CMR is useful, acquiring these sequences can be time-consuming and prohibitive, e.g., due to the administration of contrast agents. Cine CMR is a rapid and contrast-free imaging technique that can visualize both motion and structural abnormalities of the myocardium induced by acute MI. Therefore, we present a new end-to-end deep neural network, referred to as CineMyoPS, to segment myocardial pathologies, i.e., scars and edema, solely from cine CMR images. Specifically, CineMyoPS extracts both motion and anatomy features associated with MI. Given the interdependence between these features, we design a consistency loss (resembling the co-training strategy) to facilitate their joint learning. Furthermore, we propose a time-series aggregation strategy to integrate MI-related features across the cardiac cycle, thereby enhancing segmentation accuracy for myocardial pathologies. Experimental results on a multi-center dataset demonstrate that CineMyoPS achieves promising performance in myocardial pathology segmentation, motion estimation, and anatomy segmentation.
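
One plausible reading of the consistency and aggregation ideas, two branches (motion-driven and anatomy-driven) pushed to agree on the pathology prediction, with per-frame outputs averaged over the cardiac cycle, is sketched below; the symmetric-KL form and the averaging are assumptions, not the paper's loss.

```python
import torch
import torch.nn.functional as F

def co_training_consistency(motion_logits, anatomy_logits):
    """Symmetric KL between the two branches' per-pixel class distributions,
    encouraging motion- and anatomy-driven predictions to agree."""
    p = F.log_softmax(motion_logits, dim=1)
    q = F.log_softmax(anatomy_logits, dim=1)
    kl_pq = F.kl_div(p, q.exp(), reduction="batchmean")
    kl_qp = F.kl_div(q, p.exp(), reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

def aggregate_over_cycle(per_frame_logits):
    """Time-series aggregation: average per-frame logits over the cardiac cycle.
    per_frame_logits: (T, C, H, W) for T frames and C classes (e.g. scar, edema, background)."""
    return per_frame_logits.mean(dim=0, keepdim=True)

# Toy usage: 25 cine frames, 3 classes, 64x64 prediction maps.
motion_logits = torch.randn(25, 3, 64, 64)
anatomy_logits = torch.randn(25, 3, 64, 64)
loss_consistency = co_training_consistency(motion_logits, anatomy_logits)
cycle_prediction = aggregate_over_cycle(motion_logits + anatomy_logits)
```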

Multimodal AI to forecast arrhythmic death in hypertrophic cardiomyopathy.

Lai C, Yin M, Kholmovski EG, Popescu DM, Lu DY, Scherer E, Binka E, Zimmerman SL, Chrispin J, Hays AG, Phelan DM, Abraham MR, Trayanova NA

PubMed · Jul 2 2025
Sudden cardiac death from ventricular arrhythmias is a leading cause of mortality worldwide. Arrhythmic death prognostication is challenging in patients with hypertrophic cardiomyopathy (HCM), a setting where current clinical guidelines show low performance and inconsistent accuracy. Here, we present a deep learning approach, MAARS (Multimodal Artificial intelligence for ventricular Arrhythmia Risk Stratification), to forecast lethal arrhythmia events in patients with HCM by analyzing multimodal medical data. MAARS' transformer-based neural networks learn from electronic health records, echocardiogram and radiology reports, and contrast-enhanced cardiac magnetic resonance images, the latter being a unique feature of this model. MAARS achieves an area under the curve of 0.89 (95% confidence interval (CI) 0.79-0.94) and 0.81 (95% CI 0.69-0.93) in internal and external cohorts and outperforms current clinical guidelines by 0.27-0.35 (internal) and 0.22-0.30 (external). In contrast to clinical guidelines, it demonstrates fairness across demographic subgroups. We interpret MAARS' predictions on multiple levels to promote artificial intelligence transparency and derive risk factors warranting further investigation.
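
The AUCs above are reported with confidence intervals; as a small, self-contained illustration of how such an interval can be obtained for any risk model's predictions, the sketch below computes a percentile-bootstrap CI for the AUROC on synthetic data (this is not the authors' evaluation code).

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUROC."""
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # resample must contain both classes
            continue
        aucs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return roc_auc_score(y_true, y_score), (lo, hi)

# Synthetic example: 200 patients with risk scores loosely related to the outcome.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 200)
scores = y * 0.8 + rng.normal(0, 1, 200)
auc, (ci_lo, ci_hi) = bootstrap_auc_ci(y, scores)
```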