Distinct visual biases affect humans and artificial intelligence in medical imaging diagnoses.
Authors
Affiliations (10)
- Department of Neurology and Neurological Sciences, Stanford University, Palo Alto, CA, USA. [email protected].
- Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada. [email protected].
- Biomedical Engineering Graduate Program, University of Calgary, Calgary, AB, Canada.
- Department of Radiology, University of Calgary, Calgary, AB, Canada.
- Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada.
- Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada.
- Department of Critical Care Medicine, University of Calgary, Calgary, AB, Canada.
- Centre for Health Informatics, University of Calgary, Calgary, AB, Canada.
- School of Health Information Science, University of Victoria, Victoria, BC, Canada.
- Department of Clinical Neurosciences, University of Calgary, Calgary, AB, Canada.
Abstract
Artificial intelligence (AI) systems can detect subtle features in diagnostic imaging scans that radiologists may miss, including higher-order features that lack obvious visual correlates. This may enable earlier disease detection and non-invasive lesion phenotyping, but it also introduces risks arising from AI's reliance on correlation rather than causation, potential demographic and technical biases, and uninterpretable reasoning. This perspective explores how radiologists and AI systems learn to perceive details in medical images differently, leading to potential discrepancies in medical decision-making.