
Distinct visual biases affect humans and artificial intelligence in medical imaging diagnoses.

December 22, 2025

Authors

McLeod GA, Stanley EAM, Rosenal T, Forkert ND

Affiliations (10)

  • Department of Neurology and Neurological Sciences, Stanford University, Palo Alto, CA, USA. [email protected].
  • Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada. [email protected].
  • Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada.
  • Department of Radiology, University of Calgary, Calgary, AB, Canada.
  • Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada.
  • Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada.
  • Department of Critical Care Medicine, University of Calgary, Calgary, AB, Canada.
  • Centre for Health Informatics, University of Calgary, Calgary, AB, Canada.
  • School of Health Information Science, University of Victoria, Victoria, BC, Canada.
  • Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada.

Abstract

Artificial intelligence (AI) systems can detect subtle features in diagnostic imaging scans that radiologists may miss, including higher-order features that lack obvious visual correlates. This may enable earlier disease detection and non-invasive lesion phenotyping, but also introduces risks due to AI's reliance on correlations rather than causation, potential demographic and technical biases, and uninterpretable reasoning. This perspective explores how radiologists and AI learn to perceive details in medical images differently, leading to potential discrepancies in medical decision-making.
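
As a hedged illustration of the demographic and technical biases the abstract cautions against (not from the paper; all group names, data, and effect sizes below are synthetic assumptions), the following sketch audits a toy classifier's discrimination separately per group. A large gap in per-group AUC is one simple way such a bias can surface before deployment.

# Minimal subgroup-audit sketch with synthetic data (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic "imaging feature" whose signal is weaker for group B,
# mimicking a scanner- or population-dependent shortcut.
group = rng.choice(["A", "B"], size=n)
y = rng.integers(0, 2, size=n)
signal = np.where(group == "A", 1.5, 0.4)
x = y * signal + rng.normal(0, 1, size=n)

model = LogisticRegression().fit(x.reshape(-1, 1), y)
scores = model.predict_proba(x.reshape(-1, 1))[:, 1]

# Report discrimination per subgroup; a large AUC gap flags potential bias.
for g in ["A", "B"]:
    mask = group == g
    print(f"group {g}: AUC = {roc_auc_score(y[mask], scores[mask]):.2f}")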

Topics

Journal Article, Review
