
A York University-led study finds that continual and transfer learning strategies can mitigate harmful data shifts in clinical AI models deployed in hospitals.
Key Details
1. Data shifts between training data and real-world hospital data can cause patient harm and model unreliability.
2. Researchers analyzed 143,049 patient encounters from seven Toronto hospitals using the GEMINI data network.
3. Significant data shifts were observed between community and academic hospitals, with models transferred from community to academic settings causing more harm.
4. Transfer learning and drift-triggered continual learning improved model robustness and prevented performance drops, especially during the COVID-19 pandemic.
5. A label-agnostic monitoring pipeline was proposed to detect and address harmful data shifts for safe, equitable AI deployment.
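To make the "label-agnostic monitoring" idea concrete, here is a minimal illustrative sketch (not the study's actual pipeline): it compares the distribution of each input feature in incoming hospital data against a reference cohort using a two-sample Kolmogorov-Smirnov statistic, and flags drift without needing outcome labels. The feature names, threshold, and data are invented for the example.

```python
import numpy as np

def ks_statistic(ref, new):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap
    between the empirical CDFs of the two samples."""
    all_vals = np.sort(np.concatenate([ref, new]))
    cdf_ref = np.searchsorted(np.sort(ref), all_vals, side="right") / len(ref)
    cdf_new = np.searchsorted(np.sort(new), all_vals, side="right") / len(new)
    return float(np.max(np.abs(cdf_ref - cdf_new)))

def detect_drift(reference, incoming, threshold=0.1):
    """Flag features whose incoming distribution drifts from the reference.
    Label-agnostic: only input features are compared; no outcome labels
    are required, so drift can be caught before harm is observable."""
    drifted = {}
    for name in reference:
        d = ks_statistic(reference[name], incoming[name])
        if d > threshold:
            drifted[name] = d
    return drifted

# Synthetic example: one stable feature, one shifted feature
# (e.g., a changed patient mix after transfer to a new hospital).
rng = np.random.default_rng(0)
reference = {"heart_rate": rng.normal(80, 10, 1000),
             "lactate": rng.normal(1.5, 0.5, 1000)}
incoming = {"heart_rate": rng.normal(80, 10, 500),   # unchanged
            "lactate": rng.normal(2.5, 0.7, 500)}    # shifted distribution
drifted = detect_drift(reference, incoming)
if drifted:
    print("Drift detected, trigger model update:", sorted(drifted))
```

In a drift-triggered continual learning setup like the one the study describes, a flag from such a monitor would initiate retraining or fine-tuning on recent data rather than updating the model on a fixed schedule.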
Why It Matters

Clinical AI models that degrade silently under shifting patient populations can cause real harm; proactive monitoring and adaptation strategies like those studied here are prerequisites for safe, equitable hospital deployment.
Source
EurekAlert