
A York University-led study finds that continual and transfer learning strategies can mitigate harmful data shifts in clinical AI models deployed in hospitals.
Key Details
- Data shifts between training data and real-world hospital data can cause patient harm and model unreliability.
- Researchers analyzed 143,049 patient encounters from seven hospitals in Toronto using the GEMINI data network.
- Significant data shifts were observed between community and academic hospitals, with transfer of models from community to academic settings leading to greater harm.
- Transfer learning and drift-triggered continual learning approaches improved model robustness and prevented performance drops, especially during the COVID-19 pandemic.
- A label-agnostic monitoring pipeline was proposed to detect and address harmful data shifts for safe, equitable AI deployment (a minimal sketch of such a pipeline follows below).
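The study's actual pipeline is not detailed in this summary, but the sketch below illustrates the general pattern under stated assumptions: a label-agnostic drift detector (here, per-feature Kolmogorov-Smirnov tests with a Bonferroni correction, one common choice) that triggers an incremental model update when incoming hospital data diverges from the training distribution. All function names, thresholds, and data in this example are hypothetical.

```python
# Minimal sketch of drift-triggered continual learning; not the study's code.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.linear_model import SGDClassifier

def detect_drift(reference: np.ndarray, incoming: np.ndarray, alpha: float = 0.05) -> bool:
    """Label-agnostic drift check: compare each feature's distribution in an
    incoming batch against the reference (training-era) data. No outcome
    labels are needed, so shifts can be flagged before labels arrive."""
    n_features = reference.shape[1]
    threshold = alpha / n_features  # Bonferroni correction across feature tests
    for j in range(n_features):
        _, p_value = ks_2samp(reference[:, j], incoming[:, j])
        if p_value < threshold:
            return True  # at least one feature distribution has shifted
    return False

# --- Hypothetical usage on synthetic data ---
rng = np.random.default_rng(0)
X_ref = rng.normal(0.0, 1.0, size=(2000, 10))   # training-era encounters
y_ref = (X_ref[:, 0] > 0).astype(int)

model = SGDClassifier(random_state=0)
model.fit(X_ref, y_ref)

# Simulated post-shift batch (e.g., pandemic-era case mix): one feature drifts.
X_new = rng.normal(0.0, 1.0, size=(500, 10))
X_new[:, 0] += 1.5
y_new = (X_new[:, 0] > 1.5).astype(int)  # in practice, labels arrive later

if detect_drift(X_ref, X_new):
    # Drift-triggered continual learning: update the model incrementally on
    # the new batch once labels are available, rather than retraining from scratch.
    model.partial_fit(X_new, y_new)
```

A multivariate two-sample test (for example, on a learned low-dimensional representation) is a common alternative to per-feature tests when features are strongly correlated; the trigger-then-update structure stays the same.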
Why It Matters
Unmonitored data shifts can silently erode the reliability of clinical AI and put patients at risk; validated monitoring and retraining strategies are a prerequisite for safe, equitable deployment of these models in hospitals.
Source
EurekAlert
Related News

MD Anderson Unveils New AI Genomics Insights and Therapeutic Advances
MD Anderson reports breakthroughs in cancer therapeutics and provides critical insights into AI models for genomic analysis.

UCLA Researchers Present AI, Blood Biomarker Advances at SABCS 2025
UCLA Health researchers unveil major advances in breast cancer AI pathology, liquid biopsy, and biomarker strategies at the 2025 SABCS.

SH17 Dataset Boosts AI Detection of PPE for Worker Safety
University of Windsor researchers released SH17, an 8,099-image open dataset for AI-driven detection of personal protective equipment (PPE) in manufacturing settings.