
HyperGraph-based capsule temporal memory network for efficient and explainable diabetic retinopathy detection in retinal imaging.

December 3, 2025

Authors

Sushith M, Malligeswari N, Anlin Sahaya Infant Tinu M, Jaiganesh M

Affiliations (4)

  • Department of Information Technology, Adithya Institute of Technology, Kurumbapalayam, Coimbatore, 641107, Tamil Nadu, India. [email protected].
  • Department of Electronics and Communication Engineering, Easwari Engineering College, Ramapuram, Chennai, 600089, Tamil Nadu, India.
  • Department of Biomedical Engineering, Rohini College of Engineering and Technology, Anjugramam, 629401, Tamil Nadu, India.
  • Department of Information Technology, Karpagam College of Engineering, Coimbatore, 641032, Tamil Nadu, India.

Abstract

Diabetic retinopathy (DR) is a chronic complication of diabetes in which retinal damage can cause vision impairment or blindness if left untreated. The main challenges in DR detection arise from the morphological variability of retinal lesions, such as microaneurysms, hemorrhages, and exudates, and from differences in imaging conditions across clinical environments. Current state-of-the-art deep learning models, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformer-based architectures, are computationally expensive, lack robustness to noisy datasets, and offer limited interpretability, which makes them difficult to deploy in real-world clinical settings. This research presents the HyperGraph Capsule Temporal Network (HGCTN), a deep learning framework designed to address these limitations and provide accurate, scalable, and interpretable DR detection. HGCTN combines hypergraph neural networks for robust modeling of higher-order spatial relationships among retinal lesions, capsule networks for hierarchical feature representation with distributed routing, and a temporal capsular memory unit (TCMU) that maintains both long-term and short-term temporal dependencies, allowing the model to track disease progression effectively. Meta-learning techniques and noise-injection strategies improve the model's adaptability and make it more resilient to real-world image variations. HGCTN is validated experimentally on the DRIVE and Diabetic Retinopathy datasets, achieving best accuracies of 99.0% and 98.8%, respectively, and outperforming existing models such as TAHDL (96.7%) and ADTATC (98.2%). Furthermore, the model achieves recalls of 100% and 99.8% and specificities of 99.7% and 99.6% on the DRIVE and Diabetic Retinopathy datasets, respectively, yielding almost no false negatives and high reliability in identifying DR cases. Hypergraph attention maps and capsule activation images further validate the model's interpretability by providing explainable predictions for a clinical audience. With its high classification accuracy, reduced computational complexity, and better generalization than existing models, HGCTN sets a new benchmark for DR detection, addressing key deficiencies of existing approaches and laying the foundation for real-world deployment of automated ophthalmic diagnosis systems.
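
The abstract names three core components of HGCTN: hypergraph convolution over lesion regions, capsule-style feature encoding, and a temporal capsular memory unit (TCMU). Below is a minimal, hypothetical PyTorch sketch of how such a pipeline could be wired together. The module names, dimensions, the incidence-matrix hypergraph convolution, and the GRU used as a stand-in for the TCMU are all illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an HGCTN-style forward pass (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

def hypergraph_conv(x, incidence, weight):
    """Simple hypergraph convolution: node -> hyperedge -> node aggregation.
    x: (N, F_in) lesion-region features, incidence: (N, E) binary incidence matrix."""
    d_v = incidence.sum(dim=1, keepdim=True).clamp(min=1)   # node degrees
    d_e = incidence.sum(dim=0, keepdim=True).clamp(min=1)   # hyperedge degrees
    edge_feat = (incidence.t() @ x) / d_e.t()               # aggregate nodes into hyperedges
    node_feat = (incidence @ edge_feat) / d_v               # scatter back to nodes
    return F.relu(node_feat @ weight)

def squash(s, dim=-1, eps=1e-8):
    """Capsule squash non-linearity: keeps direction, maps length into [0, 1)."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)

class HGCTNSketch(nn.Module):
    def __init__(self, in_dim=64, hid_dim=32, caps_dim=16, n_classes=5):
        super().__init__()
        self.hg_weight = nn.Parameter(torch.randn(in_dim, hid_dim) * 0.1)
        self.to_caps = nn.Linear(hid_dim, caps_dim)
        self.tcmu = nn.GRU(caps_dim, caps_dim, batch_first=True)  # GRU as a TCMU stand-in
        self.classifier = nn.Linear(caps_dim, n_classes)

    def forward(self, x, incidence):
        # x: (T, N, in_dim) lesion-region features over T time steps
        # incidence: (N, E) hypergraph incidence shared across time steps
        caps_seq = []
        for t in range(x.size(0)):
            h = hypergraph_conv(x[t], incidence, self.hg_weight)  # (N, hid_dim)
            caps = squash(self.to_caps(h))                        # (N, caps_dim) capsules
            caps_seq.append(caps.mean(dim=0))                     # pool lesion capsules
        seq = torch.stack(caps_seq).unsqueeze(0)                  # (1, T, caps_dim)
        _, memory = self.tcmu(seq)                                 # temporal summary state
        return self.classifier(memory.squeeze(0))                  # (1, n_classes) DR grade logits

# Toy usage: 3 time steps, 10 lesion regions, 6 hyperedges.
model = HGCTNSketch()
x = torch.randn(3, 10, 64)
incidence = (torch.rand(10, 6) > 0.5).float()
print(model(x, incidence).shape)  # torch.Size([1, 5])
```

The sketch only illustrates how hypergraph aggregation, capsule squashing, and a recurrent memory could be composed; details such as attention over hyperedges, dynamic routing, meta-learning, and noise injection from the abstract are omitted.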

Topics

Journal Article
