A View-Agnostic Deep Learning Framework for Comprehensive Analysis of 2D-Echocardiography

Authors

Anisuzzaman, D. M., Malins, J. G., Jackson, J. I., Lee, E., Naser, J. A., Rostami, B., Bird, J. G., Spiegelstein, D., Amar, T., Ngo, C. C., Oh, J. K., Pellikka, P. A., Thaden, J. J., Lopez-Jimenez, F., Poterucha, T. J., Friedman, P. A., Pislaru, S., Kane, G. C., Attia, Z. I.

Affiliations (1)

  • Mayo Clinic

Abstract

Echocardiography traditionally requires experienced operators to select and interpret clips from specific viewing angles. Clinical decision-making is therefore limited for handheld cardiac ultrasound (HCU), which is often collected by novice users. In this study, we developed a view-agnostic deep learning framework to estimate left ventricular ejection fraction (LVEF), patient age, and patient sex from any of several views containing the left ventricle. Model performance was: (1) consistently strong across retrospective transthoracic echocardiography (TTE) datasets; (2) comparable between prospective HCU and TTE (625 patients; LVEF r² 0.80 vs. 0.86, LVEF [>40% vs. ≤40%] AUC 0.981 vs. 0.993, age r² 0.85 vs. 0.87, sex classification AUC 0.985 vs. 0.996); (3) comparable between prospective HCU data collected by experts and by novice users (100 patients; LVEF r² 0.78 vs. 0.66, LVEF AUC 0.982 vs. 0.966). This approach may broaden the clinical utility of echocardiography by lessening the need for user expertise in image acquisition.
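The abstract reports two evaluation metrics: the coefficient of determination (r²) for continuous LVEF regression, and the ROC AUC for the dichotomized task of classifying reduced (≤40%) versus preserved (>40%) ejection fraction. The sketch below illustrates how these metrics are computed; the helper functions and the example LVEF values are illustrative assumptions, not the authors' pipeline or study data.

```python
# Illustrative computation of r^2 and ROC AUC for LVEF evaluation.
# All values below are made-up examples, not data from the study.

def r_squared(y_true, y_pred):
    # 1 - (residual sum of squares / total sum of squares)
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def roc_auc(labels, scores):
    # Probability that a random positive case outranks a random
    # negative case (ties count half) -- equivalent to ROC AUC.
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

true_lvef = [62.0, 35.0, 55.0, 28.0, 48.0, 60.0]   # reference LVEF (%)
pred_lvef = [60.5, 38.0, 53.0, 31.0, 45.0, 58.0]   # model estimates (%)

r2 = r_squared(true_lvef, pred_lvef)                # continuous agreement
labels = [1 if v <= 40 else 0 for v in true_lvef]   # 1 = reduced EF (<=40%)
scores = [-v for v in pred_lvef]                    # lower LVEF -> higher risk
auc = roc_auc(labels, scores)
```

Negating the predicted LVEF turns it into a "reduced-EF risk" score, since a lower ejection fraction corresponds to a higher likelihood of the positive (≤40%) class.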

Topics

cardiovascular medicine
