An Interpretable Hybrid AI Model for Breast Fine Needle Aspiration Cytology Image Classification.
Authors
Affiliations (4)
- Department of Computer Science & Engineering, Girijananda Chowdhury University, Guwahati, India.
- Mathematical and Computational Sciences Division, Institute of Advanced Study in Science and Technology (IASST), Guwahati, India. [email protected].
- Arya Wellness Centre, Guwahati, India.
- Department of Computer Science, Gauhati University, Guwahati, India.
Abstract
While fine needle aspiration cytology (FNAC) and mammography are both used to diagnose breast lesions, FNAC is generally more accurate than mammography for predicting breast cancer. It is also gaining popularity as an early detection tool owing to its rapid and straightforward procedure, cost-effectiveness, and minimal risk of complications. Deep learning enhances breast cancer detection by extracting crucial features, yielding more accurate results than conventional techniques, while classical machine learning is less time-intensive and requires fewer parameter adjustments. This work is presented as a proof-of-concept study on FNAC images obtained from two centers. Eighteen hybrid architectures are developed and evaluated, combining the strengths of deep learning feature extractors (Inception-V3, MobileNet-V2, and DenseNet-121) with three machine learning classifiers (Support Vector Machine, Decision Tree, and k-Nearest Neighbours) for binary classification of breast FNAC images. The study is based on an indigenously collected dataset of 427 images (152 benign and 275 malignant), later expanded through augmentation to 2,866 images (1,216 benign and 1,650 malignant). The hybrid model that concatenates features extracted by MobileNet-V2 and DenseNet-121 achieves the highest internal test accuracy of 98.26% when paired with an SVM classifier, along with the highest sensitivity (97.95%) and specificity (98.48%) among the evaluated architectures. The Grad-CAM-based explainability component received 95% positive clinical validation from expert pathologists, underscoring the model's trustworthiness and interpretability, both of which are critical for clinical adoption and decision-making support. With these metrics and this validation rate, the proposed hybrid model provides clear, interpretable insights that support clinical decision-making.
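
For readers who want a concrete picture of the fusion pipeline, the sketch below shows one plausible implementation of the best-performing configuration: ImageNet-pretrained MobileNet-V2 and DenseNet-121 backbones whose globally pooled features are concatenated and fed to an SVM. The 224x224 input resolution, ImageNet weights, and RBF kernel are assumptions; the abstract does not specify these details.

```python
import numpy as np
from tensorflow.keras.applications import MobileNetV2, DenseNet121
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input as mnv2_pre
from tensorflow.keras.applications.densenet import preprocess_input as dn_pre
from sklearn.svm import SVC

IMG_SHAPE = (224, 224, 3)  # assumed input resolution

# Two frozen, ImageNet-pretrained backbones used purely as feature extractors;
# pooling="avg" yields one global feature vector per image.
mobilenet = MobileNetV2(weights="imagenet", include_top=False,
                        pooling="avg", input_shape=IMG_SHAPE)
densenet = DenseNet121(weights="imagenet", include_top=False,
                       pooling="avg", input_shape=IMG_SHAPE)

def extract_features(images):
    """Concatenate pooled features from both backbones.

    images: float32 array of shape (n, 224, 224, 3) with raw pixel values 0-255.
    Returns an (n, 1280 + 1024) fused feature matrix.
    """
    f1 = mobilenet.predict(mnv2_pre(images.copy()), verbose=0)  # (n, 1280)
    f2 = densenet.predict(dn_pre(images.copy()), verbose=0)     # (n, 1024)
    return np.concatenate([f1, f2], axis=1)

# Hypothetical usage: X_train/X_test are image arrays, y labels 0=benign, 1=malignant.
# svm = SVC(kernel="rbf")  # kernel choice is an assumption
# svm.fit(extract_features(X_train), y_train)
# predictions = svm.predict(extract_features(X_test))
```

This separation, deep networks for representation and a classical classifier for the decision, reflects the abstract's stated motivation: deep features are discriminative, while the SVM stage trains quickly with few hyperparameters.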
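The abstract reports Grad-CAM heatmaps as the explainability layer validated by pathologists. Grad-CAM requires gradients through a differentiable classification head, and how it is attached to the SVM stage is not specified here; the sketch below is therefore the standard Grad-CAM computation on a Keras CNN with a classification output, where conv_layer_name (e.g., the last convolutional activation of the chosen backbone) is an assumption.

```python
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index):
    """Standard Grad-CAM heatmap for a single image of shape (1, H, W, 3)."""
    # Model that exposes both the target conv feature maps and the prediction.
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image)
        score = preds[:, class_index]          # score of the class of interest
    grads = tape.gradient(score, conv_out)     # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))  # channel weights via GAP of grads
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                   # keep positively contributing regions
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()  # normalised to [0, 1]
```

Upsampled to the input size and overlaid on the cytology image, such a heatmap highlights the regions driving the prediction, which is the kind of output the pathologists in the study would have reviewed during clinical validation.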