EchoAgent: guideline-centric reasoning agent for echocardiography measurement and interpretation.

May 15, 2026

Authors

Daghyani M, Wang L, Hashemi N, Medhat B, Abdelsamad B, Rojas Velez E, Li X, Tsang MYC, Luong C, Abolmaesumi P, Tsang TSM

Affiliations (4)

  • University of British Columbia, Vancouver, British Columbia, Canada. [email protected].
  • University of British Columbia, Vancouver, British Columbia, Canada. [email protected].
  • University of British Columbia, Vancouver, British Columbia, Canada.
  • Vancouver General Hospital, Vancouver, British Columbia, Canada.

Abstract

Echocardiographic interpretation requires video-level reasoning and guideline-based measurement analysis, which current deep learning models for cardiac ultrasound do not support. We present EchoAgent, a framework that enables structured, interpretable automation for this domain. EchoAgent orchestrates specialized vision tools under large language model (LLM) control to perform temporal localization, spatial measurement, and clinical interpretation. A key contribution is a measurement-feasibility prediction model that determines whether anatomical structures are reliably measurable in each frame, enabling autonomous tool selection. We curated a benchmark of diverse, clinically validated video-query pairs for evaluation. To assess robustness across institutions, we further evaluate EchoAgent on a curated subset of the publicly available MIMIC-IV-EchoQA benchmark, specifically targeting questions answerable via linear measurements to remain within the current framework's scope. EchoAgent outperforms current medical VLMs and cardiac foundation models in video-level reasoning, demonstrating superior accuracy and interpretability on both our internal benchmark and the external MIMIC-IV-EchoQA subset. Outputs are grounded in visual evidence and clinical guidelines, supporting transparency and traceability. This work demonstrates the feasibility of agentic, guideline-aligned reasoning for echocardiographic video analysis, enabled by task-specific tools and full video-level automation. EchoAgent provides a framework for enhancing transparency and guideline adherence, representing a step toward more trustworthy AI in cardiac ultrasound. Our code will be made publicly available at https://github.com/DeepRCL/EchoAgent.
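The agentic pattern the abstract describes — a controller selecting among specialized tools, gated by a per-frame measurement-feasibility check, then interpreting the result against clinical guideline ranges — can be sketched roughly as follows. All names here (`feasibility_scores`, `measure_lvidd`, the threshold and normal range) are illustrative assumptions, not the authors' actual API; the real implementation will be in the GitHub repository linked above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

def feasibility_scores(frames: List[str], structure: str) -> List[float]:
    """Stub for the feasibility model: scores how reliably a structure
    can be measured in each frame. In practice this would be a learned
    per-frame classifier, not a string check."""
    return [0.9 if "end_diastole" in f else 0.3 for f in frames]

# Hypothetical tool registry the controller can choose from.
TOOLS: Dict[str, Callable[[str], float]] = {
    # Stub measurement tool: returns a linear measurement in cm.
    "measure_lvidd": lambda frame: 5.2,
}

@dataclass
class AgentAnswer:
    frame: str
    value_cm: float
    interpretation: str

def run_agent(frames: List[str], structure: str, tool: str,
              normal_range=(4.2, 5.9), threshold=0.5) -> AgentAnswer:
    """Pick the most measurable frame, run the tool, interpret
    the value against a guideline normal range."""
    scores = feasibility_scores(frames, structure)
    best = max(range(len(frames)), key=lambda i: scores[i])
    if scores[best] < threshold:
        raise ValueError("no frame is reliably measurable")
    value = TOOLS[tool](frames[best])
    lo, hi = normal_range
    verdict = "normal" if lo <= value <= hi else "abnormal"
    return AgentAnswer(frames[best], value, verdict)

answer = run_agent(["frame_01", "end_diastole_frame", "frame_03"],
                   structure="LV", tool="measure_lvidd")
print(answer.frame, answer.value_cm, answer.interpretation)
```

The feasibility gate is the key design point: rather than measuring every frame, the agent measures only where the structure is reliably visible, which is what makes fully automated video-level operation plausible.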

Topics

Journal Article
