
Explainable deep learning-based lung cancer diagnosis using clinically-guided local interpretable model-agnostic explanations.

April 9, 2026

Authors

Hassan SU, Abdulkadir SJ, Alhussian HS, Fayyaz AM, Al-Selwi SM, Khan U, Ismail AOA

Affiliations (6)

  • Department of Computing, Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia. [email protected].
  • Centre for Intelligent Signal & Imaging Research (CISIR), Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia. [email protected].
  • Department of Computing, Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia.
  • Center for Research in Data Science (CeRDaS), Universiti Teknologi PETRONAS, Seri Iskandar, Malaysia.
  • Faculty of Business & Technology, University of Cyberjaya (UoC), Persiaran Bestari, Cyber 11, 63000, Cyberjaya, Selangor, Malaysia.
  • Unit of Specializations of Technology and Engineering, Applied College, King Khalid University, 61913, Muhayil, Saudi Arabia.

Abstract

Lung cancer remains one of the leading causes of cancer-related deaths worldwide, highlighting the urgent need for accurate and interpretable diagnostic tools. While deep learning (DL) models have achieved strong results in medical image classification, their opaque decision-making process remains a barrier to clinical adoption. This study proposes adaptive superpixel perturbation-based local interpretable model-agnostic explanations (ASP-LIME), a novel explanation framework designed to generate faithful and localized interpretations of DL predictions, providing insight into the model's decision-making process. The proposed approach improves upon the original local interpretable model-agnostic explanations (LIME) method by introducing adaptive superpixel segmentation, stratified perturbation strategies, lung region masking, and post-processing enhancements tailored for medical imaging. The framework is applied to a lung cancer classification task using a custom-designed convolutional neural network, MedDeepNet, as the predictive model. Experimental results on a publicly available lung image dataset show that MedDeepNet achieves 99.84% accuracy, 99.66% recall, 99.82% precision, 99.74% specificity, and a 99.74% F1-score. ASP-LIME produces high-fidelity explanations with strong localization to pathological regions, achieving scores of 0.0300 for deletion, 0.9622 for insertion, and 0.9661 for Area Between Perturbation Curves (ABPC), surpassing typical benchmarks for interpretability methods. These findings demonstrate that the proposed framework offers consistent and interpretable explanations that enhance understanding of model decisions in medical imaging applications.
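To make the mechanics concrete, below is a minimal NumPy sketch of the two ideas the abstract relies on: a LIME-style perturbation explainer that switches superpixels on and off and fits a weighted linear surrogate, and the deletion/insertion fidelity curves (with ABPC as their gap) used to score the explanation. This is not the authors' ASP-LIME implementation; the function names, the mean-value baseline, the exponential proximity kernel, and the mean-of-curve AUC approximation are all illustrative assumptions.

```python
import numpy as np

def explain_superpixels(predict, image, segments, n_samples=200, seed=0):
    """LIME-style attribution sketch: randomly switch superpixels off
    (replacing them with the image mean), query the model, and fit a
    weighted linear surrogate whose coefficients score each superpixel."""
    rng = np.random.default_rng(seed)
    seg_ids = np.unique(segments)
    z = rng.integers(0, 2, size=(n_samples, len(seg_ids)))  # on/off masks
    z[0] = 1  # keep one unperturbed sample
    baseline = image.mean()  # assumed "off" fill value
    preds = np.empty(n_samples)
    for i in range(n_samples):
        perturbed = image.copy()
        for j, sid in enumerate(seg_ids):
            if z[i, j] == 0:
                perturbed[segments == sid] = baseline
        preds[i] = predict(perturbed[None])[0]
    # Weight samples by similarity to the original image (exponential kernel).
    weights = np.exp(-((1.0 - z.mean(axis=1)) ** 2) / 0.25)
    W = np.sqrt(weights)[:, None]
    X = np.hstack([z, np.ones((n_samples, 1))])  # intercept column
    coefs, *_ = np.linalg.lstsq(W * X, W[:, 0] * preds, rcond=None)
    return dict(zip(seg_ids.tolist(), coefs[:-1].tolist()))

def deletion_insertion_scores(predict, image, importance, segments, baseline=0.0):
    """Fidelity sketch: deleting superpixels in decreasing importance should
    collapse the model's score quickly (low deletion AUC), while inserting
    them into a blank canvas should restore it quickly (high insertion AUC);
    ABPC is reported here as the gap between the two curves."""
    order = sorted(importance, key=importance.get, reverse=True)
    del_img, ins_img = image.copy(), np.full_like(image, baseline)
    del_curve = [predict(del_img[None])[0]]
    ins_curve = [predict(ins_img[None])[0]]
    for sid in order:
        mask = segments == sid
        del_img[mask] = baseline          # remove the superpixel
        ins_img[mask] = image[mask]       # reveal the superpixel
        del_curve.append(predict(del_img[None])[0])
        ins_curve.append(predict(ins_img[None])[0])
    deletion, insertion = float(np.mean(del_curve)), float(np.mean(ins_curve))
    return deletion, insertion, insertion - deletion
```

On a toy image whose "tumor" lives in one superpixel, the surrogate assigns that segment the largest coefficient, and deleting it first drops the score immediately; the real framework replaces the toy model with MedDeepNet and the fixed grid with adaptive superpixels restricted to the masked lung region.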

Topics

Journal Article
