Evaluating both AI algorithms and human users is key for safe adoption in high-stakes healthcare settings, according to an Ohio State study.
Key Details
- Researchers studied 462 nursing students and professionals using AI-assisted patient monitoring simulations.
- Accurate AI predictions improved decision-making by up to 60%, but inaccurate predictions drove correct decisions down by more than 100%, below unassisted baseline performance.
- Explanations and supporting data had minimal impact; participants were strongly influenced by the AI's predictions themselves.
- The study calls for simultaneous evaluation of both algorithms and clinical users in safety-critical settings.
- Findings appear in npj Digital Medicine (DOI: 10.1038/s41746-025-01784-y).
Why It Matters
This work highlights the risks of over-reliance on AI in clinical workflows and underscores the need for robust, joint evaluation protocols in radiology and other safety-critical healthcare applications, so that human-AI teams can handle both good and poor system performance.

Source
EurekAlert
Related News

• EurekAlert
BraDiPho: New 3D AI Atlas Integrates Brain Dissections with MRI
Researchers have developed BraDiPho, a tool that merges ex-vivo photogrammetric brain dissection data with in-vivo MRI tractography using AI.

• EurekAlert
AI Maps Genetic Factors Shaping the Corpus Callosum via MRI Scans
USC researchers used AI to analyze MRI scans and uncover the genetic architecture of the brain's corpus callosum.

• EurekAlert
WashU Launches AI Imaging Center to Advance Precision Diagnostics
Washington University establishes a new center to develop AI-powered imaging tools for better diagnosis and precision medicine.