Clinician-Centric Explainable Artificial Intelligence Framework for Medical Imaging Diagnostics: A Systematic Review.
Authors
Affiliations (5)
- Department of Software Engineering, Federal University of Technology, Owerri, Imo State, Nigeria.
- Department of Surveying and Geoinformatics, Federal University of Technology, Owerri, Imo State, Nigeria.
- St. George Specialist Hospital, Effurun, Delta State, Nigeria.
- Asokoro District Hospital, Abuja, Federal Capital Territory, Nigeria.
- Department of Cyber Security, Cardiff School of Technologies, Cardiff Metropolitan University, Cardiff, Wales, UK.
Abstract
Medical imaging has evolved from conventional X-rays to advanced digital modalities, with artificial intelligence (AI), particularly deep learning, playing an increasingly central role in diagnostic support. This study presents a systematic literature review (SLR) of AI-driven medical imaging research, focusing on classification-based models and explainability approaches in pneumonia detection. Using predefined inclusion criteria and PRISMA-guided screening, 95 studies were synthesized to identify dominant architectures, dataset trends, performance patterns, and persistent challenges. The analysis shows that convolutional neural networks (CNNs) and their variants remain the most frequently adopted models, accounting for the largest proportion of applications across X-ray, computed tomography (CT), and magnetic resonance imaging (MRI). Reported diagnostic performance across the reviewed studies commonly exceeded 90% in accuracy and AUC, with models such as DeepMediX, XNet, Wavelet-CNN, and RadCLIP demonstrating strong predictive capability in their respective experimental settings. However, the review identifies significant gaps in explainability, clinical workflow integration, ethical compliance, and trust evaluation. Accordingly, this paper proposes a clinician-centric explainable artificial intelligence (CC-XAI) framework derived from the literature synthesis. The framework integrates multilevel explainability, contextual clinical alignment, and human-in-the-loop feedback mechanisms to bridge the gap between black-box AI systems and real-world clinical practice. Rather than introducing a new predictive model, the framework provides a structured design blueprint for embedding explainability into medical imaging diagnostics. The findings highlight the continued dominance of deep learning in medical imaging while underscoring the urgent need for clinician-oriented XAI frameworks that support transparency, trust, and responsible AI deployment in healthcare.