WSDC-ViT: a novel transformer network for pneumonia image classification based on windows scalable attention and dynamic rectified linear unit convolutional modules.
Authors
Affiliations (8)
- School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou, 014010, China. [email protected].
- Information Engineering College, Hebei University of Architecture, Zhangjiakou, 075000, China. [email protected].
- School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou, 014010, China. [email protected].
- School of Digital and Intelligent Industry, Inner Mongolia University of Science and Technology, Baotou, 014010, China.
- School of Automation and Electrical Engineering, Inner Mongolia University of Science and Technology, Baotou, 014040, China.
- School of Information and Electronics, Beijing Institute of Technology, Beijing, 100081, China.
- College of Information Engineering, Inner Mongolia University of Technology, Hohhot, 010051, China.
- School of Computer Science and Technology, Baotou Medical College, Inner Mongolia University of Science and Technology, Baotou, 014040, China.
Abstract
Accurate differential diagnosis of pneumonia remains challenging, as different types of pneumonia require distinct treatment strategies. Early and precise diagnosis is crucial for minimizing the risk of misdiagnosis and for effectively guiding clinical decision-making and monitoring treatment response. This study proposes the WSDC-ViT network to enhance computer-aided pneumonia detection and alleviate the diagnostic workload for radiologists. Unlike existing models such as Swin Transformer or CoAtNet, which improve attention mechanisms primarily through hierarchical designs or convolutional embeddings, WSDC-ViT introduces an architecture that simultaneously strengthens global and local feature extraction through a scalable self-attention mechanism and convolutional refinement. Specifically, the network integrates a scalable self-attention mechanism that decouples the query, key, and value dimensions to reduce computational overhead and improve contextual learning, while an interactive window-based attention module further strengthens long-range dependency modeling. Additionally, a convolution-based module equipped with a dynamic ReLU activation function is embedded within the transformer encoder to capture fine-grained local details and adaptively enhance feature representation. Experimental results show that the proposed method achieves an average classification accuracy of 95.13% and an F1-score of 95.63% on a chest X-ray dataset, along with 99.36% accuracy and a 99.34% F1-score on a CT dataset. These results highlight the model's superior performance over existing automated pneumonia classification approaches and underscore its potential clinical applicability.
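To make the two architectural ingredients named in the abstract more concrete, the following is a minimal, illustrative PyTorch sketch of (a) a self-attention layer whose key/value width is decoupled from the token width and (b) a convolutional block with an input-dependent (dynamic) ReLU. The class names, dimensions, and the DY-ReLU-style slope prediction are assumptions made for illustration only; they are not taken from, and should not be read as, the authors' WSDC-ViT implementation.

```python
# Illustrative sketch only: an assumed reading of "scalable self-attention with
# decoupled query/key/value dimensions" and "a convolutional module with dynamic ReLU".
import torch
import torch.nn as nn


class ScalableSelfAttention(nn.Module):
    """Self-attention whose key/value width (kv_dim) can be smaller than the token width (dim)."""

    def __init__(self, dim, kv_dim, num_heads=4):
        super().__init__()
        assert kv_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = kv_dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.q = nn.Linear(dim, kv_dim)      # queries projected to the reduced width
        self.k = nn.Linear(dim, kv_dim)
        self.v = nn.Linear(dim, kv_dim)
        self.proj = nn.Linear(kv_dim, dim)   # project back to the token width

    def forward(self, x):                    # x: (B, N, dim)
        B, N, _ = x.shape

        def split(t):                        # (B, N, kv_dim) -> (B, heads, N, head_dim)
            return t.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        q, k, v = split(self.q(x)), split(self.k(x)), split(self.v(x))
        attn = (q @ k.transpose(-2, -1)) * self.scale
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, -1)
        return self.proj(out)


class DynamicReLUConvBlock(nn.Module):
    """Depthwise conv block whose activation slopes are predicted from the input (DY-ReLU-A style)."""

    def __init__(self, channels, reduction=4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        # Tiny hyper-network that predicts two piecewise-linear slopes per sample.
        self.coef = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, 2),
        )

    def forward(self, x):                    # x: (B, C, H, W)
        y = self.conv(x)
        a1, a2 = self.coef(x).sigmoid().unbind(dim=-1)   # slopes in (0, 1)
        a1 = (2 * a1).view(-1, 1, 1, 1)                  # rescale the first slope to (0, 2)
        a2 = a2.view(-1, 1, 1, 1)
        return torch.maximum(a1 * y, a2 * y)             # input-dependent piecewise-linear activation


if __name__ == "__main__":
    tokens = torch.randn(2, 49, 96)          # e.g. a 7x7 window of 96-dim tokens
    print(ScalableSelfAttention(dim=96, kv_dim=48)(tokens).shape)   # torch.Size([2, 49, 96])
    feat = torch.randn(2, 96, 7, 7)
    print(DynamicReLUConvBlock(96)(feat).shape)                     # torch.Size([2, 96, 7, 7])
```

In this reading, projecting queries, keys, and values to a width smaller than the token dimension shrinks the attention matrices and projections, which is one plausible way to obtain the reduced computational overhead the abstract describes; the dynamic ReLU lets the activation adapt per input, consistent with the stated goal of adaptively enhancing local feature representation.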