Evaluating both AI algorithms and human users is key for safe adoption in high-stakes healthcare settings, according to an Ohio State study.
Key Details
- Researchers studied 462 nursing students and professionals using AI-assisted patient monitoring simulations.
- Accurate AI predictions improved decision-making by up to 60%, but inaccurate AI predictions led to a drop of more than 100% in correct decisions.
- Explanations and supporting data had minimal mitigating effect; participants were strongly swayed by the AI predictions themselves.
- The study calls for simultaneous evaluation of both algorithms and clinical users in safety-critical settings.
- Findings appear in npj Digital Medicine (DOI: 10.1038/s41746-025-01784-y).
Why It Matters

Source
EurekAlert
Related News

MD Anderson Unveils New AI Genomics Insights and Therapeutic Advances
MD Anderson reports breakthroughs in cancer therapeutics and provides critical insights into AI models for genomic analysis.

UCLA Researchers Present AI, Blood Biomarker Advances at SABCS 2025
UCLA Health researchers unveil major advances in breast cancer AI pathology, liquid biopsy, and biomarker strategies at the 2025 SABCS.

SH17 Dataset Boosts AI Detection of PPE for Worker Safety
University of Windsor researchers released SH17, an 8,099-image open dataset for AI-driven detection of personal protective equipment (PPE) in manufacturing settings.