
A York University-led study finds that continual and transfer learning strategies can mitigate harmful data shifts in clinical AI models used in hospitals.
Key Details
- Data shifts between training data and real-world hospital data can cause patient harm and model unreliability.
- Researchers analyzed 143,049 patient encounters from seven hospitals in Toronto using the GEMINI data network.
- Significant data shifts were observed between community and academic hospitals, and transferring models from community to academic settings led to more harm.
- Transfer learning and drift-triggered continual learning improved model robustness and prevented performance drops, especially during the COVID-19 pandemic (a sketch of a drift-triggered update follows this list).
- A label-agnostic monitoring pipeline was proposed to detect and address harmful data shifts for safe, equitable AI deployment (a sketch of label-agnostic drift detection follows this list).
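The article does not detail the study's monitoring pipeline, but a label-agnostic check can be as simple as comparing feature distributions between the training era and a recent window of encounters. The sketch below uses per-feature two-sample Kolmogorov-Smirnov tests; the function name detect_feature_drift, the alpha threshold, and the synthetic data are illustrative assumptions, not the study's implementation.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_feature_drift(reference, current, alpha=0.01):
    """Flag features whose recent distribution differs from the
    training-era (reference) distribution, using per-feature
    two-sample Kolmogorov-Smirnov tests. No outcome labels needed."""
    drifted = {}
    for j in range(reference.shape[1]):
        stat, p_value = ks_2samp(reference[:, j], current[:, j])
        if p_value < alpha:
            drifted[j] = {"ks_stat": stat, "p_value": p_value}
    return drifted

# Illustrative data: a recent window whose features have shifted.
rng = np.random.default_rng(0)
train_X = rng.normal(0.0, 1.0, size=(5000, 10))   # training-era features
recent_X = rng.normal(0.3, 1.2, size=(800, 10))   # shifted recent encounters
print(detect_feature_drift(train_X, recent_X))    # flags the shifted features
```

In practice such a detector would run on a schedule, and a flagged shift would be reviewed before any model update, since not every statistical shift is a harmful one.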
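A drift signal can then gate model updates. The following sketch pairs such a detector with an incremental scikit-learn classifier, updating only when drift is flagged; the study's actual models, features, and retraining schedule are not specified in the article, so drift_triggered_update and the warm-start data here are hypothetical.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

def drift_triggered_update(model, recent_X, recent_y, drift_detected):
    """Fine-tune the model on the latest labeled window, but only
    when monitoring has flagged a data shift; skipping updates on
    stable data avoids needless retraining."""
    if drift_detected:
        model.partial_fit(recent_X, recent_y)
    return model

# Warm-start an incremental classifier on historical encounters.
rng = np.random.default_rng(1)
hist_X = rng.normal(size=(2000, 10))
hist_y = (hist_X[:, 0] > 0).astype(int)        # synthetic outcome label
clf = SGDClassifier(random_state=0)
clf.partial_fit(hist_X, hist_y, classes=np.array([0, 1]))

# When the monitor flags drift (e.g., a pandemic-era shift), adapt the
# model on the recent window instead of retraining from scratch.
new_X = hist_X + 0.3                           # shifted features
clf = drift_triggered_update(clf, new_X, hist_y, drift_detected=True)
```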
Why It Matters
Clinical AI models that perform well at training time can silently degrade as patient populations, care settings, and events like the COVID-19 pandemic shift the data they see; monitoring for harmful shifts and adapting models with transfer and continual learning helps keep deployed systems safe and equitable.
Source
EurekAlert
Related News

Deep Learning AI Outperforms Clinical Prognostics for Colorectal Cancer Recurrence
A new deep learning model using histopathology images identifies recurrence risk in stage II colorectal cancer more effectively than standard clinical predictors.

AI Reveals Key Health System Levers for Cancer Outcomes Globally
AI-based analysis identifies the most impactful policy and resource factors for improving cancer survival across 185 countries.

Dual-Branch Graph Attention Network Predicts ECT Success in Teen Depression
Researchers developed a dual-branch graph attention network that uses structural and functional MRI data to accurately predict individual responses to electroconvulsive therapy in adolescents with major depressive disorder.