An efficient dual path deep learning framework for COVID-19 classification using lung CT scans with explainable AI.
Authors
Affiliations (4)
- Department of Electrical and Electronic Engineering, Green University of Bangladesh, Purbachal American City, Rupganj, Narayanganj, 1461, Bangladesh.
- Department of Electrical and Electronic Engineering, Bangladesh University of Business and Technology, Dhaka, 1216, Bangladesh.
- American International University-Bangladesh, Dhaka, 1229, Bangladesh.
- Department of Electrical Engineering, College of Engineering, Qassim University, Buraydah, 52571, Saudi Arabia. [email protected].
Abstract
While the global burden of COVID-19 has eased due to widespread vaccination and public health efforts, the virus has not been eradicated. New variants continue to emerge, and localized outbreaks remain a concern, particularly in regions with limited healthcare resources. This highlights the ongoing need for rapid, accurate, and scalable diagnostic tools. In this study, a comprehensive deep learning framework for detecting COVID-19 from lung CT scans is presented, aimed at improving diagnostic reliability and computational efficiency. An extensive and diverse CT dataset was curated by combining images from nine publicly available datasets, comprising 25,408 samples across the COVID-19 and normal classes. Multiple state-of-the-art convolutional neural networks (CNNs) and vision transformer models were fine-tuned and evaluated under consistent conditions to establish a strong performance benchmark. Based on these findings, a new lightweight parallel model was developed, combining a custom CNN with a pretrained backbone. Both networks process the input image independently, and their extracted features are fused at the final stage for classification. The proposed model achieved higher accuracy (97.46%) than the other models tested in this study, while maintaining low computational complexity. Additionally, explainable AI techniques, including Grad-CAM and LIME, were employed to provide visual interpretations of the model's predictions.
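The dual-path design described above — two branches processing the same input independently, with their features concatenated before a final classifier — can be sketched as follows. This is an illustrative PyTorch sketch only: the paper's actual custom CNN layers and choice of pretrained backbone are not specified here, so both branches below are hypothetical stand-ins (in practice the second branch would be a pretrained model such as a torchvision backbone with its classifier head removed).

```python
import torch
import torch.nn as nn

class DualPathNet(nn.Module):
    """Hypothetical sketch of a dual-path late-fusion classifier."""

    def __init__(self, num_classes=2):
        super().__init__()
        # Branch 1: small custom CNN (layer sizes are assumptions)
        self.custom_cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 32 features
        )
        # Branch 2: stand-in for a pretrained backbone; a real
        # implementation would load ImageNet weights here
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=4, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),   # -> 64 features
        )
        # Late fusion: concatenate both feature vectors, then classify
        self.classifier = nn.Linear(32 + 64, num_classes)

    def forward(self, x):
        # Each branch sees the same input image independently
        features = torch.cat([self.custom_cnn(x), self.backbone(x)], dim=1)
        return self.classifier(features)

model = DualPathNet()
logits = model(torch.randn(1, 3, 224, 224))  # one 224x224 RGB CT slice
print(tuple(logits.shape))  # (1, 2): COVID-19 vs. normal logits
```

The key property of this late-fusion arrangement is that the lightweight custom branch and the pretrained branch contribute complementary features, and only the small fused classifier must learn to combine them.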