
A York University-led study finds that continual and transfer learning strategies can mitigate harmful data shifts in clinical AI models deployed in hospitals.
Key Details
- Data shifts between training data and real-world hospital data can cause patient harm and model unreliability.
- Researchers analyzed 143,049 patient encounters from seven hospitals in Toronto using the GEMINI data network.
- Significant data shifts were observed between community and academic hospitals, with models transferred from community to academic settings causing the most harm.
- Transfer learning and drift-triggered continual learning improved model robustness and prevented performance drops, especially during the COVID-19 pandemic.
- A label-agnostic monitoring pipeline was proposed to detect and address harmful data shifts for safe, equitable AI deployment.
Why It Matters

Source
EurekAlert
Related News

AI Time Series Model Boosts EEG-Based Seizure Prediction by 44%
UC Santa Cruz engineers' 'future-guided' deep learning improves seizure prediction accuracy using EEG data.

NTU Singapore to Launch Master's in AI in Medicine for Clinicians and Technologists
NTU Singapore will launch a new MSc in Artificial Intelligence in Medicine to train clinicians and technologists in clinical AI applications from 2026.

AI Accurately Predicts Lymph Node Extension in HPV-related Throat Cancer via CT
An AI pipeline automates lymph node segmentation and extranodal extension prediction from CT in HPV-positive oropharyngeal cancer, correlating with patient outcomes.