Evaluating both AI algorithms and human users is key for safe adoption in high-stakes healthcare settings, according to an Ohio State study.
Key Details
- Researchers studied 462 nursing students and professionals using AI-assisted patient monitoring simulations.
- Accurate AI predictions improved decision making by up to 60%, but inaccurate AI predictions caused a drop of more than 100% in correct decisions (see the illustrative calculation after this list).
- Explanations and supporting data had minimal impact; participants were strongly influenced by the AI's predictions themselves.
- The study calls for simultaneous evaluation of both algorithms and clinical users in safety-critical settings.
- Findings appear in npj Digital Medicine (DOI: 10.1038/s41746-025-01784-y).
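The effect sizes above are easiest to read as relative change against a no-AI baseline. The sketch below is a hypothetical illustration only: every number is invented, and the idea that the underlying metric is a score that can fall below zero is an assumption, not something stated by the study. It simply shows how such percentages are computed and how a decline can arithmetically exceed 100%.

```python
# Hypothetical illustration: none of these numbers come from the study.
# Shows how percent change versus a no-AI baseline is computed, and how a
# decline can exceed 100% when the underlying metric can go negative
# (e.g., a net decision score rather than a simple proportion).

def percent_change(score: float, baseline: float) -> float:
    """Relative change of `score` versus `baseline`, in percent."""
    return (score - baseline) / abs(baseline) * 100.0

baseline_score = 10.0        # assumed decision score with no AI assistance
accurate_ai_score = 16.0     # assumed score when the AI prediction is correct
inaccurate_ai_score = -2.0   # assumed score when the AI prediction is wrong

print(f"Accurate AI:   {percent_change(accurate_ai_score, baseline_score):+.0f}%")   # +60%
print(f"Inaccurate AI: {percent_change(inaccurate_ai_score, baseline_score):+.0f}%")  # -120%
```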
Why It Matters
Participants were strongly swayed by the AI's predictions, for better and for worse, so safe deployment in high-stakes care depends on evaluating the combined human-AI system rather than the algorithm's accuracy alone.
Source
EurekAlert
Related News

AI-Enabled Hydrogel Patch Provides Long-Term High-Fidelity EEG and Attention Monitoring
Researchers unveil a reusable hydrogel patch with machine learning capabilities for high-fidelity EEG recording and attention assessment.

AI and Imaging Reveal Atomic Order in 2D Nanomaterials
A multi-university team has uncovered how atomic order and disorder in 2D MXene nanomaterials can be predicted and tailored using AI, enabled by advanced imaging analysis.

DreamConnect AI Translates and Edits fMRI Brain Activity into Images
Researchers unveil DreamConnect, an AI system that reconstructs and edits visual imagery from fMRI brain data with language prompts.