Evaluating both AI algorithms and their human users is key to safe adoption in high-stakes healthcare settings, according to an Ohio State study.
Key Details
- Researchers studied 462 nursing students and professionals using AI-assisted patient monitoring simulations.
- Accurate AI predictions improved decision making by up to 60%, but inaccurate AI caused over a 100% drop in correct decisions.
- Explanations and supporting data had minimal impact; participants were highly influenced by AI predictions.
- The study calls for simultaneous evaluation of both algorithms and clinical users in safety-critical settings.
- Findings appear in npj Digital Medicine (DOI: 10.1038/s41746-025-01784-y).
Why It Matters
Participants were strongly swayed by AI predictions even when those predictions were wrong, so deploying AI-assisted monitoring tools without also evaluating how clinicians respond to them could put patient safety at risk.
Source
EurekAlert
Related News

Deep Learning AI Outperforms Clinical Prognostics for Colorectal Cancer Recurrence
A new deep learning model using histopathology images identifies recurrence risk in stage II colorectal cancer more effectively than standard clinical predictors.

AI Reveals Key Health System Levers for Cancer Outcomes Globally
AI-based analysis identifies the most impactful policy and resource factors for improving cancer survival across 185 countries.

Dual-Branch Graph Attention Network Predicts ECT Success in Teen Depression
Researchers developed a dual-branch graph attention network that uses structural and functional MRI data to accurately predict individual responses to electroconvulsive therapy in adolescents with major depressive disorder.