Evaluating both AI algorithms and the human users who rely on them is key to safe adoption in high-stakes healthcare settings, according to an Ohio State University study.
Key Details
- Researchers studied 462 nursing students and professionals using AI-assisted patient monitoring simulations.
- Accurate AI predictions improved decision making by up to 60%, but inaccurate AI caused more than a 100% drop in correct decisions.
- Explanations and supporting data had minimal impact; participants were strongly influenced by the AI predictions themselves.
- The study calls for simultaneous evaluation of both algorithms and clinical users in safety-critical settings.
- Findings appear in npj Digital Medicine (DOI: 10.1038/s41746-025-01784-y).
Why It Matters
This work highlights the risks of over-reliance on AI in clinical workflows and underscores the need for robust, joint evaluation protocols in radiology and other safety-critical healthcare applications, so that human-AI teams can handle both good and poor system performance.

Source
EurekAlert
Related News

• EurekAlert
AI Model Accurately Predicts Blood Loss Risk in Liposuction
A machine learning model predicts blood loss during high-volume liposuction with 94% accuracy.

• EurekAlert
AI-Driven CT Tool Predicts Cancer Spread in Oropharyngeal Tumors
Researchers have created an AI tool that uses CT imaging to predict the spread risk of oropharyngeal cancer, offering improved treatment stratification.

• EurekAlert
AI Model PRTS Predicts Spatial Transcriptomics From H&E Histology Images
Researchers developed PRTS, a deep learning model that infers single-cell spatial transcriptomics from standard H&E-stained tissue images.