Artificial intelligence and computer-aided diagnosis in diagnostic decisions: 5 questions for medical informatics and human-computer interface research.

Authors

Brunyé TT, Mitroff SR, Elmore JG

Affiliations (3)

  • Center for Applied Brain and Cognitive Sciences, Tufts University, Medford, MA 02155, United States.
  • Department of Psychological & Brain Sciences, The George Washington University, Washington, DC 20006, United States.
  • Department of Medicine and the National Clinician Scholar Program, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA 90095, United States.

Abstract

Artificial intelligence (AI) has the potential to transform medical informatics by supporting clinical decision-making, reducing diagnostic errors, and improving workflows and efficiency. However, successful integration of AI-based decision support systems depends on careful consideration of human-AI collaboration, trust, skill maintenance, and automation bias. This work proposes five central questions to guide future research in medical informatics and human-computer interface (HCI).

We focus on AI-based clinical decision support systems, including computer vision algorithms for medical imaging (radiology, pathology), natural language processing for structured and unstructured electronic health record (EHR) data, and rule-based systems. Relevant data modalities include clinician-acquired images, EHR text, and increasingly, patient-generated content in telehealth contexts. We review existing evidence regarding diagnostic errors across specialties, the effectiveness and risks of AI tools in reducing perceptual and interpretive errors, and the human factors influencing diagnostic decision-making in AI-enabled contexts. We synthesize insights from medicine, cognitive science, and HCI to identify gaps in knowledge and propose five key questions for continued research.

Diagnostic errors remain common across medicine, with AI offering potential to reduce both perceptual and interpretive errors. However, the impact of AI depends critically on how and when information is presented. Studies indicate that delayed or toggleable cues may outperform immediate ones, but attentional capture, overreliance, and bias remain significant risks. Explainable AI provides transparency but can also bias decisions. Long-term reliance on AI may erode clinician skills, particularly for trainees and in low-prevalence contexts. Historical failures of computer-aided diagnosis in mammography highlight these challenges. Effective AI integration requires human-centered and adaptive design. The five central research questions address: (1) what type and format of information AI should provide; (2) when information should be presented; (3) how explainable AI affects diagnostic decisions; (4) how AI influences automation bias and complacency; and (5) the risks of skill decay due to reliance on AI. Each question underscores the importance of balancing efficiency, accuracy, and clinician expertise while mitigating bias and skill degradation.

AI holds promise for improving diagnostic accuracy and efficiency, but realizing its potential requires post-deployment evaluation, equitable access, clinician oversight, and targeted training. AI must complement, rather than replace, human expertise, ensuring safe, effective, and sustainable integration into diagnostic decision-making. Addressing these challenges proactively can maximize AI's potential across healthcare and other high-stakes domains.

Topics

Journal Article
