Enhanced retinal blood vessel segmentation via loss balancing in dense generative adversarial networks with quick attention mechanisms.
Affiliations (4)
- Department of Information Technology, MLR Institute of Technology, Hyderabad, Telangana, India. [email protected].
- Department of Computer Science and Engineering, SVR Engineering College, affiliated with JNTUA, Nandyala, Andhra Pradesh, 518502, India.
- Department of Computer Science and Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai, Tamil Nadu, India.
- Department of Physics, Saveetha School of Engineering, Saveetha Institute of Medical and Technical Science (SIMATS), Thandalam, Chennai, 602105, India.
Abstract
Manual segmentation of retinal blood vessels in fundus images has been widely used for detecting vascular occlusion, diabetic retinopathy, and other retinal conditions. However, existing automated methods struggle to segment fine vessels accurately and to optimize loss functions effectively. This study aims to develop an integrated framework that improves vessel segmentation accuracy and robustness for clinical applications. The proposed pipeline integrates multiple advanced techniques to address the limitations of current approaches. In preprocessing, Quasi-Cross Bilateral Filtering (QCBF) is applied to reduce noise and enhance vessel visibility. Feature extraction is performed using a Directed Acyclic Graph Neural Network with VGG16 (DAGNN-VGG16) for hierarchical and topologically aware representation learning. Segmentation is achieved using a Dense Generative Adversarial Network with a Quick Attention Network (Dense GAN-QAN), which balances loss and emphasizes critical vessel features. To further improve training convergence, the Swarm Bipolar Algorithm (SBA) is employed for loss minimization. The method was evaluated on three benchmark retinal vessel segmentation datasets (CHASE-DB1, STARE, and DRIVE) using sixfold cross-validation. The proposed approach achieved consistently high mean performance across all datasets: accuracy of 99.87%, F1-score of 99.82%, precision of 99.84%, recall of 99.78%, and specificity of 99.87%, demonstrating strong generalization and robustness. The integrated QCBF-DAGNN-VGG16-Dense GAN-QAN-SBA framework advances the state of the art in retinal vessel segmentation by effectively handling fine vessel structures and ensuring optimized training. Its consistently high performance across multiple datasets highlights its potential for reliable clinical deployment in retinal disease detection and diagnosis.
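To make the staged pipeline concrete, the sketch below outlines the flow from preprocessing to feature extraction to segmentation using common off-the-shelf substitutes. It is a minimal illustration, not the authors' implementation: `cv2.bilateralFilter` plus CLAHE stands in for QCBF, torchvision's `vgg16` backbone stands in for the DAGNN-VGG16 extractor, and a toy convolutional head stands in for the Dense GAN-QAN generator; the names `preprocess`, `FeatureExtractor`, and `SegmentationGenerator` are hypothetical.

```python
# Minimal sketch of the described pipeline stages (assumed substitutes, not the paper's code).
import cv2
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import vgg16


def preprocess(fundus_bgr: np.ndarray) -> np.ndarray:
    """Noise reduction / vessel enhancement stand-in for QCBF."""
    green = fundus_bgr[:, :, 1]                 # green channel carries most vessel contrast
    smoothed = cv2.bilateralFilter(green, 9, 75, 75)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(smoothed)


class FeatureExtractor(nn.Module):
    """VGG16 convolutional backbone as a hierarchical feature extractor."""
    def __init__(self):
        super().__init__()
        self.features = vgg16(weights=None).features  # conv blocks only

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)


class SegmentationGenerator(nn.Module):
    """Toy generator head producing a per-pixel vessel probability map."""
    def __init__(self, in_ch: int = 512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1), nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)


if __name__ == "__main__":
    # DRIVE-sized dummy fundus image stands in for real data.
    img = np.random.randint(0, 256, (584, 565, 3), dtype=np.uint8)
    enhanced = preprocess(img)
    x = torch.from_numpy(enhanced / 255.0).float()[None, None]  # (1, 1, H, W)
    x = x.repeat(1, 3, 1, 1)                                    # VGG16 expects 3 channels
    feats = FeatureExtractor()(x)
    mask = SegmentationGenerator()(feats)
    print(mask.shape)  # coarse vessel probability map
```

In the paper's full method, the generator would be trained adversarially with a discriminator and the quick-attention and SBA loss-balancing components; the sketch above only shows how the stages compose.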