Is a score enough? Pitfalls and solutions for AI severity scores.
Authors
Affiliations (5)
- Department of Diagnostic Imaging, Brown Radiology Human Factors Lab, Rhode Island Hospital, Warren Alpert School of Medicine of Brown University, Providence, RI, USA. [email protected].
- Department of Radiology and Imaging Sciences, Emory University, School of Medicine, Atlanta, GA, USA.
- Penn State College of Medicine, The Milton S. Hershey Medical Center, Penn State Health, Hershey, PA, USA.
- Translational Laboratory for Cardiothoracic Imaging and Artificial Intelligence, Emory University, Atlanta, GA, USA.
- Department of Diagnostic Imaging, Brown Radiology Human Factors Lab, Rhode Island Hospital, Warren Alpert School of Medicine of Brown University, Providence, RI, USA.
Abstract
Severity scores, which often represent the likelihood or probability of a pathology, are commonly provided by artificial intelligence (AI) tools in radiology. However, little attention has been given to how these AI scores are used, and there is a lack of transparency into how they are generated. In this comment, we draw on key principles from psychological science and statistics to elucidate six human factors limitations of AI scores that undermine their utility: (1) variability across AI systems; (2) variability within AI systems; (3) variability between radiologists; (4) variability within radiologists; (5) unknown distribution of AI scores; and (6) perceptual challenges. We hypothesize that these limitations can be mitigated by providing the false discovery rate and false omission rate at each score threshold. We discuss how this hypothesis could be empirically tested.
KEY POINTS:
- The radiologist-AI interaction has not been given sufficient attention.
- The utility of AI scores is limited by six key human factors limitations.
- We propose a hypothesis for how to mitigate these limitations by using the false discovery rate and false omission rate.
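To make the proposed mitigation concrete, the sketch below (not from the paper; the function name and data are hypothetical) shows how the false discovery rate (FDR) and false omission rate (FOR) could be computed for a given AI score threshold, assuming binary ground-truth labels and continuous scores:

```python
def fdr_for_at_threshold(scores, labels, threshold):
    """Return (FDR, FOR) when cases with score >= threshold are called positive.

    FDR = FP / (FP + TP): fraction of positive calls that are wrong.
    FOR = FN / (FN + TN): fraction of negative calls that are wrong.
    """
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 1)
    tn = sum(1 for s, y in zip(scores, labels) if s < threshold and y == 0)
    fdr = fp / (fp + tp) if (fp + tp) else 0.0
    fom = fn / (fn + tn) if (fn + tn) else 0.0
    return fdr, fom

# Hypothetical example: AI scores and ground-truth labels for six cases.
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
labels = [1, 1, 0, 1, 0, 0]
fdr, fom = fdr_for_at_threshold(scores, labels, 0.5)
print(round(fdr, 3), round(fom, 3))  # -> 0.333 0.333
```

Reporting these two rates at the displayed score would tell the radiologist, for that threshold, how often a positive call is a false alarm and how often a negative call is a miss.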