A view-flexible deep learning framework for automated analysis of 2D echocardiography.

April 1, 2026 · PubMed

Authors

Anisuzzaman DM, Malins JG, Jackson JI, Lee E, Naser JA, Rostami B, Bird JG, Spiegelstein D, Amar T, Ngo CC, Oh JK, Pellikka PA, Thaden JJ, Lopez-Jimenez F, Poterucha TJ, Friedman PA, Pislaru SV, Kane GC, Attia ZI

Affiliations (3)

  • Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, US.
  • UltraSight Ltd., Rehovot, Israel.
  • Department of Cardiovascular Medicine, Mayo Clinic, Rochester, MN, US. [email protected].

Abstract

Echocardiography traditionally requires experienced operators to select and interpret clips from specific viewing angles. Clinical decision-making is therefore limited for handheld cardiac ultrasound (HCU), which is often collected by novice users. In this study, we developed a view-flexible deep learning framework to estimate left ventricular ejection fraction (LVEF), patient age, and patient sex from any of several views containing the left ventricle. Model performance was: (1) consistently strong across retrospective transthoracic echocardiography (TTE) datasets; (2) comparable between prospective HCU versus TTE (625 patients; LVEF r² 0.80 vs. 0.86, LVEF [> or ≤40%] AUC 0.981 vs. 0.993, age r² 0.85 vs. 0.87, sex classification AUC 0.985 vs. 0.996); (3) comparable between prospective HCU data collected by experts versus novice users (100 patients; LVEF r² 0.77 vs. 0.64, LVEF AUC 0.983 vs. 0.968). This approach may broaden the clinical utility of echocardiography by lessening the need for user expertise in image acquisition. Created in BioRender. Malins, J. (2026) https://BioRender.com/lhw4pb1.
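The abstract reports two kinds of LVEF metrics: r² for the continuous estimate and AUC for classifying reduced (≤40%) versus preserved (>40%) ejection fraction. As an illustrative sketch only (this is not the authors' code, and the LVEF values below are made up), these can be computed from paired ground-truth and predicted values with scikit-learn:

```python
# Illustrative sketch of the two LVEF metrics from the abstract.
# Not the study's code; the paired values below are hypothetical.
from sklearn.metrics import r2_score, roc_auc_score

# Hypothetical ground-truth LVEF (%) and model estimates per patient.
true_lvef = [62, 55, 35, 48, 30, 70, 40, 58]
pred_lvef = [60, 57, 38, 45, 33, 68, 42, 55]

# Continuous agreement: coefficient of determination (r^2).
r2 = r2_score(true_lvef, pred_lvef)

# Binary task at the abstract's 40% cutoff: label 1 = reduced EF (<=40%).
# Negate the predicted LVEF so that lower estimates rank as more positive.
labels = [1 if v <= 40 else 0 for v in true_lvef]
scores = [-v for v in pred_lvef]
auc = roc_auc_score(labels, scores)

print(f"r^2 = {r2:.3f}, AUC = {auc:.3f}")
```

Note the sign flip before `roc_auc_score`: AUC expects higher scores for the positive class, whereas a lower predicted LVEF indicates reduced ejection fraction.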

Topics

Journal Article
