
A York University-led study finds that continual and transfer learning strategies can mitigate harmful data shifts in clinical AI models deployed in hospitals.
Key Details
- Data shifts between training and real-world hospital data can cause patient harm and model unreliability.
- Researchers analyzed 143,049 patient encounters from seven hospitals in Toronto using the GEMINI data network.
- Significant data shifts were observed between community and academic hospitals, with transfer of models from community to academic settings leading to more harm.
- Transfer learning and drift-triggered continual learning approaches improved model robustness and prevented performance drops, especially during the COVID-19 pandemic.
- A label-agnostic monitoring pipeline was proposed to detect and address harmful data shifts for safe, equitable AI deployment.
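The drift-triggered approach above can be illustrated with a minimal sketch: monitor incoming data against a reference distribution using only input features (label-agnostic), and trigger a model update when the shift exceeds a threshold. The function names, the mean-shift statistic, and the threshold below are illustrative assumptions, not the study's actual pipeline.

```python
import statistics

def detect_drift(reference, incoming, threshold=0.5):
    """Label-agnostic drift check: measure how far the incoming feature
    mean has moved from the reference mean, in units of the reference
    standard deviation. A simple stand-in for a real drift statistic."""
    ref_mean = statistics.mean(reference)
    ref_sd = statistics.stdev(reference)
    shift = abs(statistics.mean(incoming) - ref_mean) / ref_sd
    return shift > threshold

def continual_update(reference, new_batch, threshold=0.5):
    """Drift-triggered continual learning loop (hypothetical): only when
    drift is detected is the model updated. Here, folding the new batch
    into the reference data stands in for retraining a real model."""
    if detect_drift(reference, new_batch, threshold):
        return reference + new_batch, True   # model updated
    return reference, False                  # no update needed
```

A key design point the study's approach suggests: because the check uses only input features, drift can be caught before delayed clinical outcome labels arrive, which is what makes proactive monitoring feasible in deployment.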
Why It Matters

Source
EurekAlert
Related News

NIH-Backed AI Model Predicts Cancer Survival Using Single-Cell Data
Researchers have developed scSurvival, a machine learning tool that uses single-cell tumor data to accurately predict cancer patient survival and identify high-risk cell populations.

Deep Learning Pathomics Platform Improves Immunotherapy Prediction in Lung Cancer
A deep learning pathomics platform accurately predicts immunotherapy response in metastatic NSCLC using routine pathology slides.

AI Pathology Model Outperforms PD-L1 in Predicting NSCLC Immunotherapy Response
MD Anderson's Path-IO machine learning platform accurately predicts immunotherapy responses in metastatic non-small cell lung cancer, surpassing current biomarker standards.