Post-hoc eXplainable AI methods for analyzing medical images of gliomas: A review for clinical applications.
Affiliations (6)
- Center for Precision Engineering, Materials and Manufacturing Research (PEM), Faculty of Engineering and Design, Atlantic Technological University, F91 YW50, Sligo, Ireland; Mathematical Modelling and Intelligent Systems for Health and Environment (MISHE), Faculty of Engineering and Design, Atlantic Technological University, Sligo, Ireland; Faculty of Engineering and Design, Atlantic Technological University, F91 YW50, Sligo, Ireland.
- Institute of Biomedical Engineering, Boğaziçi University, Istanbul, Turkey.
- School of Biomedical Engineering & Imaging Sciences, King's College London, Rayne Institute, 4th Floor, Lambeth Wing, London, SE1 7EH, UK; Department of Neuroradiology, Ruskin Wing, King's College Hospital NHS Foundation Trust, London, SE5 9RS, UK.
- Center for Precision Engineering, Materials and Manufacturing Research (PEM), Faculty of Engineering and Design, Atlantic Technological University, F91 YW50, Sligo, Ireland; Faculty of Engineering and Design, Atlantic Technological University, F91 YW50, Sligo, Ireland.
- Department of Computer Science and Applied Physics, Atlantic Technological University, Galway, Ireland.
- Center for Precision Engineering, Materials and Manufacturing Research (PEM), Faculty of Engineering and Design, Atlantic Technological University, F91 YW50, Sligo, Ireland; Mathematical Modelling and Intelligent Systems for Health and Environment (MISHE), Faculty of Engineering and Design, Atlantic Technological University, Sligo, Ireland; Faculty of Engineering and Design, Atlantic Technological University, F91 YW50, Sligo, Ireland. Electronic address: [email protected].
Abstract
Deep learning (DL) has shown promise in glioma imaging tasks using magnetic resonance imaging (MRI) and histopathology images, yet the complexity of these models demands greater transparency in artificial intelligence (AI) systems. This is particularly important when users must understand a model's output in a clinical application. In this systematic review, 65 post-hoc eXplainable AI (XAI), or interpretable AI, studies are reviewed that explain why a system generated a given output for tasks related to glioma imaging. A framework of post-hoc XAI methods, such as Gradient-based XAI (G-XAI) and Perturbation-based XAI (P-XAI), is introduced to evaluate deep models and explain their application to gliomas. The papers on XAI techniques in gliomas are surveyed and categorized both by their specific aim, such as grading, genetic biomarker detection, localization, intra-tumoral heterogeneity assessment, and survival analysis, and by their XAI approach. This review highlights the growing integration of XAI in glioma imaging and demonstrates its role in bridging AI decision-making and medical diagnostics. A co-occurrence analysis emphasizes the role of these methods in enhancing model transparency and trust and in guiding future research toward more reliable clinical applications. Finally, the current challenges associated with DL and XAI approaches and their clinical integration are discussed, with an outlook on future opportunities from clinical users' perspectives and upcoming trends in XAI.
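To make the G-XAI versus P-XAI distinction concrete, the sketch below contrasts a vanilla gradient saliency map with an occlusion-sensitivity map for a toy glioma-grade classifier. This is a minimal illustration, not a method from any reviewed study: the network, input shape, patch size, and class index are all illustrative assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in for a trained glioma-grading CNN (hypothetical; assumes a
# single-channel 2D MRI slice and two output classes, e.g. low/high grade).
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

x = torch.randn(1, 1, 64, 64, requires_grad=True)  # toy MRI slice (random data)
target = 1  # index of the class being explained (assumption: "high grade")

# --- G-XAI: vanilla gradient saliency ---------------------------------------
# Attribution = magnitude of the class score's gradient w.r.t. each input pixel.
score = model(x)[0, target]
score.backward()
saliency = x.grad.abs().squeeze(0).squeeze(0)  # shape (64, 64)

# --- P-XAI: occlusion sensitivity --------------------------------------------
# Attribution = drop in the class score when each image patch is masked out;
# a large drop marks a region the model relies on.
patch = 8
occlusion = torch.zeros(64, 64)
with torch.no_grad():
    base = model(x)[0, target].item()
    for i in range(0, 64, patch):
        for j in range(0, 64, patch):
            x_occ = x.detach().clone()
            x_occ[0, 0, i:i + patch, j:j + patch] = 0.0  # zero out one patch
            occlusion[i:i + patch, j:j + patch] = base - model(x_occ)[0, target].item()

print(saliency.shape, occlusion.shape)  # both torch.Size([64, 64])
```

The two families trade off differently: gradient-based maps need a single backward pass but inherit the model's gradient noise, while perturbation-based maps are model-agnostic yet require one forward pass per occluded patch, which becomes costly for volumetric MRI.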