
Explainable Artificial Intelligence (AI) for Medical Imaging: A Framework for Bridging the AI Trust Gap.

May 13, 2026

Authors

Savage CH, Sulam J, Huang CM, Yi PH

Affiliations (5)

  • Department of Diagnostic Radiology & Nuclear Medicine, University of Maryland School of Medicine, Baltimore, MD.
  • Department of Computer Science, Johns Hopkins University, Baltimore, MD.
  • Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD.
  • Malone Center for Engineering in Health Care, Johns Hopkins University, Baltimore, MD.
  • Department of Radiology, St Jude Children's Research Hospital, Memphis, TN.

Abstract

Artificial intelligence (AI) is increasingly used in healthcare but often lacks the trust of clinicians and patients. Explainable AI (XAI) aims to clarify predictions and make AI decisions more transparent, interpretable, and clinically actionable, yet current methods fall short. In this Perspective, we argue that for XAI to be clinically useful in medical imaging and to build trust with clinicians, it must satisfy three guiding principles: technical robustness, adaptation to end users, and alignment of explanations with the specific clinical task. We introduce a conceptual framework incorporating these principles to guide future XAI design and deployment, based on expectations and shared responsibilities for developers, vendors, and healthcare institutions. By ensuring robustness, personalizing outputs, and aligning explanations with use cases, XAI can move beyond one-size-fits-all approaches toward task- and user-centered design, supporting effective and trustworthy AI adoption in healthcare.

Topics

Journal Article
