Artificial intelligence for detecting acute heart failure on chest CT: prospective clinical proof-of-concept validation.
Affiliations (9)
- Department of Cardiology, Copenhagen University Hospital-Bispebjerg and Frederiksberg, Copenhagen, Denmark. [email protected].
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark. [email protected].
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark.
- Department of Cardiology, Copenhagen University Hospital-Bispebjerg and Frederiksberg, Copenhagen, Denmark.
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark.
- Department of Cardiology, Copenhagen University Hospital-Amager and Hvidovre, Copenhagen, Denmark.
- Department of Radiology, Copenhagen University Hospital-Bispebjerg and Frederiksberg, Copenhagen, Denmark.
- Department of Radiology, Copenhagen University Hospital-Herlev Gentofte, Copenhagen, Denmark.
- Department of Radiology and Nuclear Medicine, Erasmus MC-University Medical Center Rotterdam, Rotterdam, The Netherlands.
Abstract
Acute heart failure (AHF) is a common but underrecognized cause of dyspnea. Chest computed tomography (CT) can accurately assess pulmonary congestion, but limited radiologist reporting capacity may constrain its clinical utility. We hypothesized that an artificial intelligence (AI) model could automatically detect imaging signs of AHF, and we aimed to prospectively validate such a model in an independent emergency department cohort, benchmarking its performance against radiologists and cardiologists. We prospectively validated a supervised machine-learning model in a single-center study of dyspneic patients undergoing low-dose, non-contrast chest CT and echocardiography. The primary analysis assessed diagnostic performance for CT-detected pulmonary congestion compatible with AHF, using radiologist-reported AHF as the reference standard and the area under the receiver operating characteristic curve (AUROC) as the performance metric. Secondary analyses compared the AI model with blinded research radiologists and expert cardiologists. Of 234 patients (56% male; age 74 ± 10 years, mean ± standard deviation), 61 (26%) had radiologist-reported AHF. The AI model achieved high diagnostic performance (AUROC 0.95 [95% confidence interval 0.93-0.98]), with 89% sensitivity [78-95] and 89% specificity [83-93]. At prespecified thresholds, the rule-out threshold maximized sensitivity (97% [89-100]) at the expense of specificity (74% [67-81]), whereas the rule-in threshold yielded high specificity (96% [92-98]) but lower sensitivity (66% [52-77]). In secondary analyses, the AI model achieved a median AUROC of 0.94 (range 0.91-0.96). The AI model demonstrated high diagnostic performance for detecting AHF on chest CT in dyspneic patients. Integration into emergency workflows may support more consistent diagnosis, independent of clinician experience or time constraints.
- AI-based analysis of chest CT may enable earlier and more consistent detection of AHF, supporting timely triage and management, especially when specialist radiological expertise is limited or delayed.
- An AI model prospectively detected AHF on chest CT in dyspneic emergency department patients.
- In a prospective single-center cohort, the AI model achieved high diagnostic performance (AUROC 0.91-0.96), comparable to that of radiologists and cardiologists.
- AI-based chest CT interpretation may improve diagnostic consistency in the absence of standardized CT criteria for AHF.
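For readers less familiar with the metrics reported above, the sketch below illustrates (with made-up toy scores, not study data) how AUROC is computed from model scores and reference labels, and how a single probability output can serve both a rule-out threshold (maximizing sensitivity) and a rule-in threshold (maximizing specificity), as in the study's prespecified analysis. The threshold values and data here are purely hypothetical.

```python
# Illustrative only: toy example of AUROC and dual-threshold
# (rule-out / rule-in) operating points. Not the study's code or data.

def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case scores higher than a negative one
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(labels, scores, threshold):
    """Sensitivity and specificity when scores >= threshold are called AHF."""
    tp = sum(y == 1 and s >= threshold for y, s in zip(labels, scores))
    fn = sum(y == 1 and s < threshold for y, s in zip(labels, scores))
    tn = sum(y == 0 and s < threshold for y, s in zip(labels, scores))
    fp = sum(y == 0 and s >= threshold for y, s in zip(labels, scores))
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: 1 = reference-positive (radiologist-reported AHF),
# scores = hypothetical AI probabilities.
labels = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.6, 0.3, 0.7, 0.4, 0.3, 0.2, 0.1, 0.1]

print(auroc(labels, scores))
# A low rule-out threshold maximizes sensitivity (few missed AHF cases);
# a high rule-in threshold maximizes specificity (few false alarms).
print(sens_spec(labels, scores, 0.25))  # rule-out operating point
print(sens_spec(labels, scores, 0.75))  # rule-in operating point
```

This is why a single AI score can support two clinical uses: the rule-out threshold trades specificity for sensitivity, and the rule-in threshold does the reverse, mirroring the 97%/74% and 66%/96% pattern reported in the abstract.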