Federated Learning Framework for Privacy-Preserving Explainable AI-Driven Clinical Decision-Making
Authors
Abstract
The application of artificial intelligence (AI) in clinical diagnostics has shown substantial potential; however, conventional centralized learning frameworks face critical limitations related to patient data privacy, data heterogeneity, and limited generalizability. To address these challenges, we propose a novel Federated Deep Learning (FDL) framework tailored for privacy-preserving, AI-driven clinical decision support. The proposed architecture integrates Vision Transformers (ViT) with DINOv2-based self-supervised learning to enable effective representation learning in the absence of extensive labeled datasets. Personalized model updates are facilitated through Federated Self-Supervised Learning (FedSSL) in conjunction with FedProx, ensuring client-specific adaptation on non-independent and identically distributed (non-IID) data. Privacy is preserved by applying differential privacy mechanisms at the model-update level, coupled with Elliptic Curve Cryptography (ECC) for secure communication. To enhance clinical transparency and interpretability, the framework incorporates Grad-CAM and LIME for sample-level explainability. The proposed system is evaluated on three publicly available medical imaging datasets: Tuberculosis (TB) detection from chest X-rays, Diabetic Retinopathy (DR) detection from fundus images, and Brain Tumor (BT) classification from MRI scans. The federated model achieved an accuracy and F1-score of 99.80% for TB, 89.0% for DR, and 97.1% for BT, reflecting high diagnostic performance across all tasks. These findings validate the efficacy, scalability, and privacy resilience of the proposed method, positioning it as a robust candidate for real-world clinical deployment in distributed healthcare environments.
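The FedProx mechanism mentioned above can be illustrated with a minimal sketch of one client's local update: each client minimizes its local loss plus a proximal term (mu/2)·||w − w_global||², which keeps client updates close to the global model on non-IID data. This is an illustrative gradient-descent sketch, not the paper's implementation; the quadratic toy loss, `mu`, learning rate, and step count are all assumed values for demonstration.

```python
import numpy as np

def fedprox_local_update(w_global, grad_fn, mu=0.01, lr=0.1, steps=10):
    """One client's local training with the FedProx proximal term.

    Minimizes  F_k(w) + (mu/2) * ||w - w_global||^2  by gradient descent,
    where grad_fn returns the gradient of the client's local loss F_k.
    The proximal term keeps the local solution near the global model,
    stabilizing aggregation under non-IID client data.
    """
    w = w_global.copy()
    for _ in range(steps):
        # gradient of the local loss plus gradient of the proximal term
        g = grad_fn(w) + mu * (w - w_global)
        w -= lr * g
    return w

# Toy quadratic local loss for one client: F_k(w) = 0.5 * ||w - target||^2,
# so the local gradient is simply (w - target).
target = np.array([1.0, -2.0])
grad = lambda w: w - target

# With mu > 0 the local optimum is pulled back toward w_global (here, zero):
# the stationary point solves (w - target) + mu * w = 0, i.e. w = target / (1 + mu).
w_new = fedprox_local_update(np.zeros(2), grad, mu=0.5, lr=0.1, steps=100)
```

A larger `mu` trades local fit for closer agreement with the global model; setting `mu=0` recovers plain FedAvg-style local training.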