
Boosting brain tumor detection with an optimized ResNet and explainability via Grad-CAM and LIME.

December 5, 2025

Authors

Afnaan K, Arunbalaji CG, Singh T, Kumar R, Naik GR

Affiliations (5)

  • Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bangalore, Karnataka, India.
  • Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Bangalore, Karnataka, India. [email protected].
  • Amrita School of Medicine, Amrita Vishwa Vidyapeetham, Faridabad, India.
  • Centre for Artificial Intelligence Research and Optimization (AIRO), Torrens University, Ultimo, NSW, 2007, Australia.
  • Design and Creative Technology Vertical, Torrens University, Wakefield Street, Adelaide, SA, 5000, Australia.

Abstract

Detecting brain tumors is essential in medical imaging, as early and accurate diagnosis significantly improves treatment decisions and patient outcomes. Convolutional neural networks (CNNs) have demonstrated high efficiency in this domain, but their lack of interpretability remains a significant drawback for clinical adoption. This study explores the integration of explainability techniques to enhance transparency in CNN-based classification and to improve model performance through advanced optimization strategies. The primary research question addressed is how to improve the accuracy, generalization, and interpretability of CNNs for brain tumor detection. While previous studies have demonstrated the effectiveness of deep learning for tumor detection, challenges such as class imbalance and CNN overfitting persist. To bridge this gap, we employ different dynamic learning rate modifiers, perform architectural enhancements, and apply explainable AI (XAI) techniques, including Grad-CAM and LIME. Our experiments are conducted on three publicly available multiclass tumor datasets to ensure the generalizability of the proposed approach. Among the tested architectures, the enhanced ResNet model consistently outperformed the others across all datasets, achieving the highest test accuracy, ranging from 99.36% to 99.65%. Techniques such as layer unfreezing, the integration of additional blocks, and pooling and dropout layers enhanced feature refinement and reduced overfitting. By incorporating XAI, we improve model interpretability, ensuring that clinically relevant regions in MRI scans are highlighted. These advancements contribute to highly reliable AI-assisted diagnostics, addressing significant challenges in medical image classification.
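
The architectural enhancements described in the abstract map naturally onto standard transfer-learning practice. Below is a minimal PyTorch sketch of one plausible reading: an ImageNet-pretrained ResNet with only its final residual stage unfrozen, a dropout-regularized classification head, and a plateau-driven dynamic learning-rate modifier. The paper does not state its framework, ResNet variant, or hyperparameters, so every concrete choice here (ResNet-50, dropout of 0.5, four classes, Adam at 1e-4) is an illustrative assumption, not the authors' configuration.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Assumed variant: an ImageNet-pretrained ResNet-50.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

    # Freeze the backbone, then unfreeze the final residual stage so its
    # features can be refined on the MRI data.
    for param in model.parameters():
        param.requires_grad = False
    for param in model.layer4.parameters():
        param.requires_grad = True

    # Replace the head: dropout against overfitting, then a linear layer
    # sized to the multiclass tumor labels (num_classes is dataset-specific).
    num_classes = 4  # hypothetical
    model.fc = nn.Sequential(
        nn.Dropout(p=0.5),
        nn.Linear(model.fc.in_features, num_classes),
    )

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    # One possible dynamic learning-rate modifier: shrink the LR whenever
    # the validation loss plateaus.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.1, patience=3
    )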
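
Grad-CAM itself is a short, framework-agnostic computation: take the gradient of the target class score with respect to a late convolutional layer, average it over the spatial dimensions to obtain per-channel weights, and use those weights to combine the layer's activations into a heatmap over the input. The following is a hand-rolled sketch against the hypothetical model above, not the authors' implementation:

    import torch
    import torch.nn.functional as F

    def grad_cam(model, image, target_layer, class_idx=None):
        # Capture the target layer's activations and output gradients.
        activations, gradients = {}, {}
        h1 = target_layer.register_forward_hook(
            lambda m, i, o: activations.update(value=o))
        h2 = target_layer.register_full_backward_hook(
            lambda m, gi, go: gradients.update(value=go[0]))

        model.eval()
        scores = model(image.unsqueeze(0))  # image: a (3, H, W) tensor
        if class_idx is None:
            class_idx = scores.argmax(dim=1).item()
        model.zero_grad()
        scores[0, class_idx].backward()
        h1.remove(); h2.remove()

        # Per-channel weights = spatially averaged gradients; combine,
        # rectify, upsample to input size, and normalize to [0, 1].
        weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
        cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
        cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                            align_corners=False)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        return cam.squeeze().detach()

    # Hypothetical usage: heatmap = grad_cam(model, mri_tensor, model.layer4)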
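
LIME complements Grad-CAM by being fully model-agnostic: it perturbs superpixels of the input image and fits a local surrogate model to the classifier's responses. A minimal sketch using the lime package follows; predict_fn and mri_array are assumed names, and the preprocessing must match whatever the trained model expects.

    import torch
    from lime import lime_image  # pip install lime

    def predict_fn(batch):
        # Adapt the PyTorch model above to LIME's numpy interface:
        # (N, H, W, C) float batches in, class probabilities out.
        x = torch.tensor(batch, dtype=torch.float32).permute(0, 3, 1, 2)
        with torch.no_grad():
            return torch.softmax(model(x), dim=1).numpy()

    explainer = lime_image.LimeImageExplainer()
    # mri_array: a hypothetical (H, W, 3) numpy image, preprocessed the
    # same way as the training data.
    explanation = explainer.explain_instance(
        mri_array, predict_fn, top_labels=1, num_samples=1000
    )
    # Overlay of the superpixels that most support the predicted class.
    img, mask = explanation.get_image_and_mask(
        explanation.top_labels[0], positive_only=True, num_features=5
    )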

Topics

Journal Article
