Accessing AI mammography reports impacts patient follow-up behaviors: the unintended consequences of including AI in patient portals.
Authors
Affiliations (7)
- The Warren Alpert Medical School of Brown University, Providence, RI, USA.
- The Brown Radiology, Psychology, and Law Lab, Brown University Health, Providence, RI, USA.
- Rhode Island Hospital, Brown University Health, Providence, RI, USA.
- Department of Radiology, Warren Alpert Medical School of Brown University, Providence, RI, USA.
- The Brown Radiology, Psychology, and Law Lab, Brown University Health, Providence, RI, USA. [email protected].
- Rhode Island Hospital, Brown University Health, Providence, RI, USA. [email protected].
- Department of Radiology, Warren Alpert Medical School of Brown University, Providence, RI, USA. [email protected].
Abstract
Although the use of artificial intelligence (AI) tools in breast imaging is growing, little research has examined how AI findings should be communicated in patient portals. English-speaking US women with ≥1 prior mammogram (n = 1623) were randomized to one of thirteen conditions. All participants viewed a radiologist report negative for breast cancer; twelve conditions additionally included an AI report with one of four AI scores (0 or 29 [no suspicion of cancer]; 31 or 50 [suspicion of cancer]), presented alone, with an abnormality cutoff threshold, or with both the threshold and the AI's False Discovery Rate (FDR) or False Omission Rate (FOR). Participants reported whether they would consult an attorney about litigation if advanced cancer were detected one year later (primary outcome). Secondary outcomes included follow-up decisions, concern for cancer, and trust. Reported intent to litigate was higher when an AI report was included, especially when the AI indicated suspicion of cancer. Providing the FDR/FOR reduced litigation intent; similar effects were observed for follow-up behaviors.