Evaluating both AI algorithms and human users is key for safe adoption in high-stakes healthcare settings, according to an Ohio State study.
Key Details
- Researchers studied 462 nursing students and professionals using AI-assisted patient monitoring simulations.
- Accurate AI predictions improved decision-making by up to 60%, but inaccurate AI predictions caused a drop of more than 100% in correct decisions.
- Explanations and supporting data had minimal impact; participants were strongly influenced by the AI's predictions.
- The study calls for simultaneous evaluation of both algorithms and clinical users in safety-critical settings.
- Findings appear in npj Digital Medicine (DOI: 10.1038/s41746-025-01784-y).
Why It Matters
This work highlights the risks of over-reliance on AI in clinical workflows and underscores the need for robust joint evaluation protocols in radiology and other safety-critical healthcare applications, so that human-AI teams can handle both good and poor system performance.

Source
EurekAlert
Related News

- EurekAlert
AI Dramatically Improves Prediction of Delivery Timing from Ultrasound Images
Ultrasound AI's study validates advanced AI for predicting delivery timing using standard ultrasound images.

- EurekAlert
AI-Assisted Colonoscopies May Reduce Clinicians’ Detection Skills, Study Finds
Routine use of AI in colonoscopies was linked to decreased adenoma detection skill among clinicians working without AI assistance.

- EurekAlert
AI Voice Analysis Shows Promise for Early Laryngeal Cancer Detection
Researchers demonstrated AI could detect early laryngeal cancer from voice recordings, distinguishing it from benign conditions.