
A York University-led study finds that continual and transfer learning strategies can mitigate harmful data shifts in clinical AI models used in hospitals.
Key Details
- Data shifts between training data and real-world hospital data can cause patient harm and model unreliability.
- Researchers analyzed 143,049 patient encounters from seven Toronto hospitals using the GEMINI data network.
- Significant data shifts were observed between community and academic hospitals, with models transferred from community settings to academic ones causing greater harm.
- Transfer learning and drift-triggered continual learning improved model robustness and prevented performance drops, especially during the COVID-19 pandemic.
- A label-agnostic monitoring pipeline was proposed to detect and address harmful data shifts for safe, equitable AI deployment (a rough sketch of such a pipeline follows this list).
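
The study's pipeline code is not reproduced here, but as a rough illustration of the idea, a label-agnostic monitor can compare the distribution of incoming inputs against the training data using two-sample tests on features alone, with no outcome labels required, and trigger a model update only when a shift is flagged. The Python sketch below is an assumption about one plausible design, not the study's implementation; detect_drift, monitor_and_update, update_fn, and the alpha threshold are hypothetical names and values chosen for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference, live, alpha=0.01):
    """Label-agnostic drift check: per-feature two-sample KS tests
    comparing a live data window against the training (reference) window."""
    drifted = []
    for j in range(reference.shape[1]):
        _, p_value = ks_2samp(reference[:, j], live[:, j])
        if p_value < alpha:          # distribution of feature j has shifted
            drifted.append(j)
    return drifted                   # indices of drifted features

def monitor_and_update(model, reference, batches, update_fn, alpha=0.01):
    """Drift-triggered continual learning: fine-tune only when the monitor
    flags a shift, rather than retraining on a fixed schedule."""
    for batch in batches:
        if detect_drift(reference, batch, alpha):
            model = update_fn(model, batch)            # e.g. fine-tune on recent encounters
            reference = np.vstack([reference, batch])  # fold new data into the reference window
    return model
```

Because the monitor inspects only input features, it can flag harmful shifts, such as the move from community to academic hospital populations, before labeled outcomes are available to reveal a drop in model performance.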
Why It Matters
Clinical AI models can silently degrade when hospital data drifts from the data they were trained on, putting patients at risk; proactive monitoring and model updating help keep deployments safe and equitable across diverse hospital settings.
Source
EurekAlert
Related News

FDA Approves Johns Hopkins AI Tool for Early Sepsis Detection
FDA clears an AI-driven system developed by Johns Hopkins to detect sepsis up to 48 hours earlier and reduce mortality rates.

New AI Vision-Language Model Enhances Chest CT Diagnostics
Researchers developed an interpretable AI model that uses visual question answering to generate detailed diagnostic findings from chest CT scans, aimed at improving lung cancer diagnosis.

Optical AI Chip Boosts Real-Time Dry Eye Gland Diagnosis Accuracy
A new metasurface spectral AI chip enables rapid, accurate diagnosis of meibomian gland dysfunction (MGD) from tissue samples, achieving 96.22% accuracy.