Evaluating both AI algorithms and their human users is essential for safe adoption in high-stakes healthcare settings, according to an Ohio State University study.
Key Details
- Researchers studied 462 nursing students and professionals using AI-assisted patient monitoring simulations.
- Accurate AI predictions improved decision making by up to 60%, while inaccurate predictions caused correct decisions to drop by more than 100%.
- Explanations and supporting data had minimal impact; participants were heavily swayed by the AI predictions themselves.
- The study calls for simultaneous evaluation of both algorithms and clinical users in safety-critical settings.
- Findings appear in npj Digital Medicine (DOI: 10.1038/s41746-025-01784-y).
Source
EurekAlert
Related News

FDA Approves Johns Hopkins AI Tool for Early Sepsis Detection
FDA clears an AI-driven system developed by Johns Hopkins to detect sepsis up to 48 hours earlier and reduce mortality rates.

New AI Vision-Language Model Enhances Chest CT Diagnostics
Researchers developed an interpretable AI model that uses visual question answering to generate detailed diagnostic findings from chest CT scans, aimed at improving lung cancer diagnosis.

Optical AI Chip Boosts Real-Time Dry Eye Gland Diagnosis Accuracy
A new metasurface spectral AI chip enables rapid, accurate diagnosis of meibomian gland dysfunction (MGD) from tissue samples, achieving 96.22% accuracy.