Use of Expected Utility to Evaluate Artificial Intelligence-Enabled Rule-out Devices for Mammography Screening.

November 24, 2025

Authors

Fan KL, Thompson YLE, Chen W, Abbey CK, Samuelson FW

Affiliations (2)

  • US Food and Drug Administration, Silver Spring, MD, USA.
  • Department of Psychological and Brain Sciences, UC Santa Barbara, Santa Barbara, CA, USA.

Abstract

Background

An artificial intelligence (AI)-enabled rule-out device may autonomously remove patient images unlikely to contain cancer from radiologist review. Many published studies evaluate this type of device by retrospectively applying the AI to large datasets and using sensitivity and specificity as the performance metrics. However, these metrics have fundamental shortcomings because sensitivity is always negatively affected in retrospective studies of rule-out applications of AI.

Method

We reviewed two performance metrics for comparing screening performance between the radiologist-with-rule-out-device and radiologist-without-device workflows: positive/negative predictive values (PPV/NPV) and expected utility (EU). We applied both methods to a recent study that reported improved performance in the radiologist-with-device workflow using a retrospective US dataset. We then applied the EU method to a European study, based on its reported recall and cancer detection rates at different AI thresholds, to compare the potential utility across thresholds.

Results

For the US study, neither PPV/NPV nor EU demonstrated a significant improvement at any of the reported algorithm thresholds. For the study using European data, we found that EU decreases as the AI rules out more patients, including false-negative cases, which reduces overall screening performance.

Conclusions

Because of the retrospective, simulated study design, sensitivity and specificity can be ambiguous metrics for evaluating a rule-out device. We showed that using PPV/NPV or EU can resolve this ambiguity. The EU method can be applied with only recall rates and cancer detection rates, which is convenient because ground truth is often unavailable for nonrecalled patients in screening mammography.

Highlights

  • Sensitivity and specificity can be ambiguous metrics for evaluating a rule-out device in a retrospective setting. PPV and NPV can resolve the ambiguity but require ground truth for all patients. Based on utility theory, expected utility (EU) is a potential metric that can help demonstrate improvement in screening performance due to a rule-out device using large retrospective datasets.
  • We applied EU to a recent study that used a large retrospective mammography screening dataset from the United States. That study reported an improvement in specificity and a decrease in sensitivity when its AI was applied retrospectively as a rule-out device. In terms of EU, we cannot conclude a significant improvement when the AI is used as a rule-out device.
  • We applied the method to a European study that reported only recall rates and cancer detection rates. Because there is no established EU baseline for the European mammography screening workflow, we estimated the baseline using data from previous literature. We cannot conclude a significant improvement when the AI is used as a rule-out device in the European study.
  • In this work, we investigated the use of EU to evaluate rule-out devices using large retrospective datasets. This metric, used with retrospective clinical data, could serve as supporting evidence for rule-out devices.
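
The abstract notes that EU can be computed from recall rates and cancer detection rates alone. The sketch below illustrates one way such a calculation could look, assuming the standard decision-theoretic operating-point score EU ∝ TPF − β·FPF rescaled to per-screened-patient rates; the function name, the relative-utility value, and the example numbers are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): an expected-utility (EU) score
# computed from screening summary rates.
#
# Assumptions, not taken from the abstract:
#   * EU is proportional to TPF - beta * FPF with
#     beta = (1 - prevalence) / (prevalence * relative_utility).
#   * Using CDR ~= prevalence * TPF and RR ~= CDR + (1 - prevalence) * FPF,
#     the per-screened-patient form becomes
#     EU ~= CDR - (RR - CDR) / relative_utility.
#   * relative_utility = 162 is a placeholder drawn from prior utility
#     analyses of screening mammography; substitute your own estimate.

def expected_utility(recall_rate: float,
                     cancer_detection_rate: float,
                     relative_utility: float = 162.0) -> float:
    """EU per screened patient from recall rate and cancer detection rate.

    Both rates are per screened patient (e.g., a CDR of 5 per 1000 is 0.005).
    """
    false_recall_rate = recall_rate - cancer_detection_rate
    return cancer_detection_rate - false_recall_rate / relative_utility


if __name__ == "__main__":
    # Hypothetical numbers for illustration only.
    baseline = expected_utility(recall_rate=0.096, cancer_detection_rate=0.0051)
    with_rule_out = expected_utility(recall_rate=0.080, cancer_detection_rate=0.0049)
    print(f"EU without device:       {baseline:.6f}")
    print(f"EU with rule-out device: {with_rule_out:.6f}")
```

Under this form, ruling out cases that include cancers lowers the CDR term directly, while the saved false recalls are discounted by the relative utility, which is why aggressive rule-out thresholds can reduce EU even as specificity improves.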

Topics

Journal Article
