
LBNet: an optimized lightweight CNN for mammographic breast cancer classification with XAI-based interpretability.

December 17, 2025

Authors

Ahmmed J,Ahmed F,Kabir MA,Ahad MT,Jadoon MA,Rehman AU,Bermak A

Affiliations (7)

  • Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh.
  • Department of Software Engineering, Daffodil International University, Dhaka, Bangladesh.
  • Department of Computer Science and Engineering, Daffodil International University, Dhaka, Bangladesh. [email protected].
  • Department of Management Information Systems, Independent University, Dhaka, Bangladesh.
  • College of Public Policy, Hamad Bin Khalifa University, Doha, Qatar.
  • College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar. [email protected].
  • College of Science and Engineering, Hamad Bin Khalifa University, Doha, Qatar.

Abstract

Breast cancer represents a major worldwide health burden, marked by high incidence and mortality rates across diverse socioeconomic populations. While deep learning has enabled advances in automated mammographic analysis, existing models often suffer from high computational complexity, limited generalizability, and a lack of interpretability. To overcome these challenges, this research introduces LBNet, a lightweight and interpretable convolutional neural network (CNN) built for accurate and efficient breast cancer detection, particularly in resource-constrained settings. With only 2.4 million trainable parameters, LBNet consists of five convolutional layers, leveraging ReLU activation, batch normalization, and max-pooling to optimize feature extraction while maintaining computational efficiency. Trained on the RSNA dataset with the Adam optimizer and five-fold cross-validation, LBNet achieved 97.28% accuracy, with 99% precision and 96% recall for cancer cases and 96% precision and 99% recall for non-cancer cases. In comparison, baseline models such as VGG19, SE-ResNet152, and ResNet152V2 yielded lower accuracies of 87.54%, 87.50%, and 85.24%, respectively, while transfer learning approaches peaked at 87.37% accuracy. LBNet's generalizability was validated on external datasets, achieving 99.54% accuracy on CBIS-DDSM and 98.50% on MIAS. To enhance clinical trust, this work integrated SHAP (SHapley Additive exPlanations) and Grad-CAM (Gradient-weighted Class Activation Mapping), which highlighted diagnostically relevant regions in mammograms and improved prediction transparency. LBNet demonstrates strong potential as an accurate, efficient, and interpretable solution for breast cancer screening, and future studies could explore its extension to multi-view mammography and real-time clinical deployment.
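The abstract outlines the architecture only at a high level (five convolutional layers, each with ReLU, batch normalization, and max-pooling, followed by a classification head). A minimal PyTorch sketch of that layout is shown below; the channel widths, kernel size, input resolution, and pooling strides are assumptions for illustration, since the abstract does not specify them, and this sketch does not reproduce the reported 2.4M-parameter count.

```python
import torch
import torch.nn as nn


class LBNetSketch(nn.Module):
    """Hypothetical reconstruction of the layout described in the abstract:
    five conv layers, each followed by batch normalization, ReLU, and
    max-pooling. Channel widths and kernel sizes are assumptions, not the
    published configuration."""

    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        widths = [32, 64, 128, 256, 256]  # assumed channel progression
        layers, prev = [], in_channels
        for w in widths:
            layers += [
                nn.Conv2d(prev, w, kernel_size=3, padding=1),
                nn.BatchNorm2d(w),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),  # halves spatial resolution per block
            ]
            prev = w
        self.features = nn.Sequential(*layers)
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.classifier = nn.Linear(prev, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(self.features(x)).flatten(1)
        return self.classifier(x)


model = LBNetSketch()
n_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
logits = model(torch.randn(1, 1, 224, 224))  # one single-view mammogram
print(logits.shape, n_params)
```

Such a model would typically be trained with `torch.optim.Adam` and cross-entropy loss under five-fold cross-validation, matching the training setup the abstract describes; interpretability overlays like Grad-CAM would then be computed from the activations of the final convolutional block.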

Topics

Journal Article
