
A York University-led study finds that continual learning and transfer learning strategies can mitigate harmful data shifts in clinical AI models deployed in hospitals.
Key Details
- Data shifts between training and real-world hospital data can cause patient harm and model unreliability.
- Researchers analyzed 143,049 patient encounters from seven hospitals in Toronto using the GEMINI data network.
- Significant data shifts were observed between community and academic hospitals, with models transferred from community settings causing more harm when deployed in academic ones.
- Transfer learning and drift-triggered continual learning approaches improved model robustness and prevented performance drops, especially during the COVID-19 pandemic (see the first sketch after this list).
- A label-agnostic monitoring pipeline was proposed to detect and address harmful data shifts for safe, equitable AI deployment (see the second sketch after this list).
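The article does not describe the study's models or update mechanics, so the following is only a minimal sketch of the transfer-learning idea in the fourth item: train a classifier on data from one setting (a stand-in for community hospitals), then continue training it on a small sample from another (a stand-in for academic hospitals) rather than deploying the source model unchanged. The SGDClassifier choice, the synthetic data, and the helper name synth_cohort are all illustrative assumptions, not the study's implementation.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Illustrative sketch only: the study's actual models and features are not
# described in the article above.
rng = np.random.default_rng(42)

def synth_cohort(n, shift=0.0):
    """Synthetic stand-in for patient encounters: 20 features, binary outcome.
    `shift` displaces the feature distribution to mimic a hospital-type shift."""
    X = rng.normal(loc=shift, size=(n, 20))
    y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 2 * shift).astype(int)
    return X, y

# "Community hospital" source data and "academic hospital" target data.
X_src, y_src = synth_cohort(5000, shift=0.0)
X_tgt, y_tgt = synth_cohort(500, shift=1.0)

# Train on the source setting, then evaluate on the shifted target setting.
model = SGDClassifier(loss="log_loss", random_state=0)
model.fit(X_src, y_src)
print("Source-only model on target data:", model.score(X_tgt[250:], y_tgt[250:]))

# Transfer learning: continue training the same model on a small labeled
# target sample instead of deploying the source model unchanged.
model.partial_fit(X_tgt[:250], y_tgt[:250])
print("After fine-tuning on target:   ", model.score(X_tgt[250:], y_tgt[250:]))
```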
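The fifth item's label-agnostic monitoring pipeline is likewise not detailed in the article. A common approach, sketched below under that assumption, is to compare incoming feature distributions against a reference window with per-feature two-sample tests and trigger a model update when drift is flagged; no outcome labels are required. The function name detect_drift and the Bonferroni-corrected 0.01 threshold are illustrative choices.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, incoming, alpha=0.01):
    """Label-agnostic drift check: per-feature two-sample Kolmogorov-Smirnov
    tests comparing a reference window against incoming data. Returns the
    indices of drifted features. A Bonferroni correction keeps the family-wise
    false-alarm rate near `alpha`. Both arrays are (n_samples, n_features)."""
    threshold = alpha / reference.shape[1]
    return [j for j in range(reference.shape[1])
            if ks_2samp(reference[:, j], incoming[:, j]).pvalue < threshold]

rng = np.random.default_rng(0)
train_window = rng.normal(size=(5000, 10))   # reference (training-time) data
live_window = rng.normal(size=(1000, 10))    # newly arriving encounters
live_window[:, 3] += 1.5                     # inject a shift in feature 3

drifted = detect_drift(train_window, live_window)
if drifted:
    # In a drift-triggered pipeline, this is where a continual-learning
    # update (e.g., the fine-tuning step sketched above) would be kicked off.
    print(f"Drift detected in features {drifted}; trigger a model update.")
```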
Why It Matters
Clinical AI models that silently degrade when real-world data drift from their training data can put patients at risk. Showing that transfer learning, drift-triggered continual learning, and label-agnostic monitoring keep models reliable across hospital types and through disruptions such as the COVID-19 pandemic points toward safer, more equitable deployment of AI in healthcare.

Source
EurekAlert
Related News

Major Study Reveals Barriers to Implementing AI Chest Diagnostics in NHS Hospitals
A UCL-led study identifies significant challenges in deploying AI tools for chest diagnostics across NHS hospitals in England.

AI Model Enhances Prediction of Infection Risks from Oral Mucositis in Stem Cell Transplant Patients
Researchers developed an explainable AI tool that accurately predicts infection risks related to oral mucositis in hematopoietic stem cell transplant patients.

AI-Enabled Hydrogel Patch Provides Long-Term High-Fidelity EEG and Attention Monitoring
Researchers unveil a reusable hydrogel patch with machine learning capabilities for high-fidelity EEG recording and attention assessment.