CLEAR: An Auditable Foundation Model for Radiology Grounded in Clinical Concepts

January 17, 2026 · medRxiv preprint

Authors

Han, T., Wu, R., Tian, Y., Khader, F., Adams, L., Bressem, K., Davatzikos, C., Kather, J. N., Shen, L., Mankoff, D., Barbosa, E., Truhn, D.

Affiliations (1)

  • University of Pennsylvania

Abstract

"Black box" deep learning models for medical image interpretation limit clinical trust and analysis of performance degradation. Here, we introduce Concept-Level Embeddings for Auditable Radiology (CLEAR), an auditable foundation model based on clinical concepts. Trained on over 0.87 million image-report pairs from 239,091 patients, CLEAR learns a visual representation and projects chest X-rays into a semantically rich space defined by large language model embeddings, making every prediction traceable to specific radiological observations. External validation on four large, physician-annotated datasets from the United States, Europe, and Asia shows that CLEAR not only achieves state-of-the-art classification performance but also enables novel applications: auditable zero-shot pathology detection, systematic identification of radiological confounders, and the creation of expert-level concept bottleneck models from data-driven concepts. By integrating clinical knowledge directly into its reasoning process, CLEAR offers a framework for robust model auditing, safer deployment, and enhanced physician-AI collaboration, advancing towards trustworthy medical AI.

Topics

health informatics
