DSSCC-Net: enhanced skin cancer classification using SMOTE-Tomek and an optimized convolutional neural network.
Authors
Affiliations (4)
- Department of Software Engineering, University of Sargodha, Sargodha, Pakistan.
- Department of Computer Science and IT, Superior University, Sargodha Campus, Sargodha, Pakistan.
- Department of Computer Science, Bacha Khan University, Charsadda, Pakistan.
- Department of Computer Science, Kardan University, Kabul, Afghanistan.
Abstract
Skin cancer remains a major global health concern, and early detection can significantly improve treatment outcomes. Traditional diagnosis relies on expert visual evaluation, which is prone to error. DSSCC-Net, a deep CNN integrated with SMOTE-Tomek resampling, improves classification accuracy and effectively handles class imbalance in dermoscopic datasets. Trained and validated on the HAM10000, ISIC 2018, and PH2 datasets, DSSCC-Net achieved an average accuracy of 97.82% ± 0.37%, precision of 97%, recall of 97%, and an AUC of 99.43%. Additional analysis using Grad-CAM and expert-labeled masks validated the model's explainability, demonstrating state-of-the-art performance and readiness for real-world clinical integration.

Current CNN-based models struggle to classify underrepresented skin lesion classes because of dataset imbalance, and they fail to achieve consistently high performance across diverse populations. There is therefore a pressing need for a robust, efficient, and interpretable model to aid dermatologists in early and accurate diagnosis.

This study proposes DSSCC-Net, a novel deep learning framework that integrates an optimized CNN architecture with the SMOTE-Tomek technique to address class imbalance. The model processes dermoscopic images from the HAM10000 dataset, resized to 28×28 pixels, and employs data augmentation, dropout layers, and ReLU activations to enhance feature extraction and reduce overfitting. Performance is evaluated using accuracy, precision, recall, F1-score, and AUC, alongside Grad-CAM for interpretability.

DSSCC-Net achieves 98% classification accuracy, outperforming state-of-the-art models such as VGG-16 (91.12%), ResNet-152 (89.32%), and EfficientNet-B0 (89.46%). The SMOTE-Tomek integration significantly improves minority-class detection, yielding an AUC of 99.43%, balanced precision (97%) and recall (97%), and a low loss value (0.1677), indicating strong generalization.

DSSCC-Net sets a new benchmark for skin cancer classification by effectively addressing class imbalance and computational limitations. Its interpretability, provided through Grad-CAM, makes it a practical tool for clinical deployment. Future work includes extending the framework to other medical imaging domains and developing real-time diagnostic applications.
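As a minimal sketch of the class-imbalance handling described above, the snippet below applies SMOTE-Tomek resampling with the imbalanced-learn library. The flattened feature matrix X_train, label vector y_train, and the rebalance helper are illustrative assumptions, not the authors' published pipeline.

```python
# Sketch: SMOTE-Tomek resampling on a flattened feature matrix (assumed names).
from collections import Counter
from imblearn.combine import SMOTETomek

def rebalance(X_train, y_train, random_state=42):
    """Oversample minority lesion classes with SMOTE, then drop Tomek-link pairs."""
    sampler = SMOTETomek(random_state=random_state)
    X_res, y_res = sampler.fit_resample(X_train, y_train)  # X_train: (n_samples, n_features)
    print("class counts before:", Counter(y_train))
    print("class counts after: ", Counter(y_res))
    return X_res, y_res
```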
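The abstract does not specify the optimized CNN layer by layer; the following Keras sketch shows a plausible small network over 28×28 RGB inputs with ReLU activations and dropout. Layer sizes and the 7-class output (the HAM10000 lesion categories) are assumptions, not the published DSSCC-Net architecture.

```python
# Sketch: small CNN over 28x28x3 dermoscopic images with ReLU and dropout (assumed sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cnn(num_classes=7):
    model = models.Sequential([
        layers.Input(shape=(28, 28, 3)),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Dropout(0.25),                        # dropout to reduce overfitting
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```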
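The reported metrics (accuracy, precision, recall, F1-score, AUC) could be computed with scikit-learn as sketched below; y_true, y_pred, and y_prob are assumed hold-out labels, predicted classes, and per-class probability scores.

```python
# Sketch: macro-averaged evaluation metrics for a multi-class lesion classifier (assumed inputs).
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score)

def evaluate(y_true, y_pred, y_prob):
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred, average="macro"),
        "recall":    recall_score(y_true, y_pred, average="macro"),
        "f1":        f1_score(y_true, y_pred, average="macro"),
        "auc":       roc_auc_score(y_true, y_prob, multi_class="ovr"),  # y_prob: (n, n_classes)
    }
```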
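Grad-CAM, used in the study for interpretability, can be sketched generically as below: gradients of the top-class score with respect to the last convolutional feature maps are pooled into channel weights and combined into a heatmap. The last_conv_layer argument is a placeholder for whatever that layer is named in the trained network; this is a standard Grad-CAM implementation, not the authors' exact visualization code.

```python
# Sketch: generic Grad-CAM heatmap for the top predicted class (assumed layer name).
import tensorflow as tf

def grad_cam(model, image, last_conv_layer):
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])   # add batch dimension
        class_idx = int(tf.argmax(preds[0]))             # highest-scoring class
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)               # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # channel-wise importance
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted feature-map sum
    cam = tf.nn.relu(cam) / (tf.reduce_max(cam) + 1e-8)  # keep positive evidence, normalize
    return cam.numpy()
```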