Evaluating both AI algorithms and the humans who use them is key to safe adoption in high-stakes healthcare settings, according to an Ohio State University study.
Key Details
- Researchers studied 462 nursing students and professionals using AI-assisted patient-monitoring simulations.
- Accurate AI predictions improved decision making by up to 60%, but inaccurate AI predictions caused a drop of over 100% in correct decisions.
- Explanations and supporting data had minimal impact; participants were heavily swayed by the AI predictions themselves.
- The study calls for simultaneous evaluation of both algorithms and clinical users in safety-critical settings.
- Findings appear in npj Digital Medicine (DOI: 10.1038/s41746-025-01784-y).
Source
EurekAlert