Explainable Deep Learning for Glaucoma Detection: A DenseNet121-Based Classification with Grad-CAM Visualization

Authors

Pathmakumara, H. C., Perera, G.

Affiliations (1)

  • NSBM Green University

Abstract

Glaucoma, one of the leading causes of irreversible blindness worldwide, often progresses without symptoms until it reaches an advanced stage. Recent advances in artificial intelligence (AI) have shown promise for automating glaucoma screening from retinal fundus images, which is essential for preventing vision loss. This study presents a deep learning-based methodology for glaucoma classification using the publicly available ACRIMA dataset. The approach employs transfer learning with a DenseNet121 backbone, combined with data augmentation and class balancing to address the dataset's inherent class imbalance. The model achieved a validation accuracy of 90.16% and a ROC AUC of 0.976, outperforming several existing techniques. To ensure clinical interpretability, Grad-CAM and Grad-CAM++ were applied to visualize decision-critical regions in the fundus images. These explainability methods confirmed that the model consistently attended to the optic disc and neuroretinal rim, in line with clinical diagnostic practice. A thorough evaluation was conducted using precision, recall, F1-score, the confusion matrix, and ROC analysis. The proposed system shows strong potential for practical deployment in resource-limited settings and portable screening units. Future improvements, including multi-modal imaging integration, dataset expansion, and the adoption of modern explainability frameworks, are expected to further enhance generalizability and clinical reliability.
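The Grad-CAM step described in the abstract reduces to a simple computation: the gradients of the class score with respect to the last convolutional layer's feature maps are global-average-pooled into per-channel weights, the feature maps are combined with those weights, and a ReLU keeps only the positively contributing regions. Below is a minimal NumPy sketch of that computation alone; it assumes the feature maps and gradients have already been extracted from a network such as DenseNet121 (the array shapes and the `grad_cam` function name are illustrative, not from the paper).

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Compute a Grad-CAM heatmap.

    feature_maps: activations of the last conv layer, shape (K, H, W).
    gradients: d(class score)/d(feature_maps), same shape (K, H, W).
    Returns a heatmap of shape (H, W), normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients over spatial dims.
    alphas = gradients.mean(axis=(1, 2))                      # shape (K,)
    # Weighted sum of feature maps, then ReLU to keep positive evidence.
    cam = np.maximum((alphas[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize so the heatmap can be overlaid on the fundus image.
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Toy example: 4 channels of 7x7 activations with synthetic gradients.
rng = np.random.default_rng(0)
maps = rng.random((4, 7, 7))
grads = rng.standard_normal((4, 7, 7))
heatmap = grad_cam(maps, grads)
```

In practice the resulting low-resolution heatmap is upsampled to the input image size and overlaid on the fundus photograph, which is how the optic-disc and neuroretinal-rim focus reported above would be visualized.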

Topics

health informatics
