
A York University-led study shows that continual and transfer learning strategies can mitigate harmful data shifts in clinical AI models deployed in hospitals.
Key Details
- Data shifts between training and real-world hospital data can cause patient harm and model unreliability.
- Researchers analyzed 143,049 patient encounters from seven Toronto hospitals using the GEMINI data network.
- Significant data shifts were observed between community and academic hospitals, with models transferred from community to academic settings causing the most harm.
- Transfer learning and drift-triggered continual learning improved model robustness and prevented performance drops, especially during the COVID-19 pandemic.
- The authors propose a label-agnostic monitoring pipeline to detect and address harmful data shifts for safe, equitable AI deployment (a minimal sketch of the idea follows this list).
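To make the monitoring idea concrete, here is a minimal, hypothetical sketch of a label-agnostic drift monitor coupled to a drift-triggered model update. It assumes a scikit-learn-style estimator with `partial_fit` and uses per-feature two-sample Kolmogorov-Smirnov tests as the drift statistic; the function names, the KS-test choice, and the `recent_labels_fn` callback are illustrative assumptions, not details from the study.

```python
# Hypothetical sketch; not the authors' actual pipeline.
import numpy as np
from scipy.stats import ks_2samp


def detect_drift(reference: np.ndarray, incoming: np.ndarray,
                 alpha: float = 0.01) -> bool:
    """Flag drift if any feature's distribution differs significantly
    between the reference (training-era) data and incoming encounters.
    Uses per-feature two-sample KS tests with a Bonferroni correction.
    Labels are never consulted, so the check is label-agnostic."""
    n_features = reference.shape[1]
    threshold = alpha / n_features  # Bonferroni correction
    for j in range(n_features):
        _, p_value = ks_2samp(reference[:, j], incoming[:, j])
        if p_value < threshold:
            return True
    return False


def monitor_and_update(model, reference, stream_batches, recent_labels_fn):
    """Drift-triggered continual learning loop: the model is updated
    only when the monitor fires, not on a fixed retraining schedule.
    `recent_labels_fn` is a hypothetical callback that supplies labels
    once outcomes become available for a flagged batch."""
    for batch in stream_batches:
        if detect_drift(reference, batch):
            X_new, y_new = recent_labels_fn(batch)
            model.partial_fit(X_new, y_new)          # incremental update
            reference = np.vstack([reference, batch])  # refresh reference window
    return model
```

The design point this illustrates is that drift detection needs only input feature distributions, so it can run continuously at deployment time, while the costlier labeled update happens only when the monitor actually fires.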
Why It Matters
Data shifts between training and deployment can silently degrade clinical AI models and put patients at risk; label-agnostic monitoring combined with transfer and continual learning offers a practical path to safe, equitable deployment across diverse hospital settings.
Source
EurekAlert
Related News

AI Model Accurately Predicts Blood Loss Risk in Liposuction
A machine learning model predicts blood loss during high-volume liposuction with 94% accuracy.

AI-Driven CT Tool Predicts Cancer Spread in Oropharyngeal Tumors
Researchers have created an AI tool that uses CT imaging to predict the spread risk of oropharyngeal cancer, offering improved treatment stratification.

AI Model PRTS Predicts Spatial Transcriptomics From H&E Histology Images
Researchers developed PRTS, a deep learning model that infers single-cell spatial transcriptomics from standard H&E-stained tissue images.