Explainable Hybrid Deep Learning Framework Integrating MobileNetV2, EfficientNetV2B0, and KNN for MRI-Based Brain Tumor Classification.

January 23, 2026

Authors

Adamu MJ, Qiang L, Nyatega CO, Fahad M, Younis A, Jabire AH, Zakariyya RS, Kawuwa HB

Affiliations (8)

  • School of Microelectronics, Tianjin University, Tianjin, 300072, China.
  • Department of Computer Science, Yobe State University, Damaturu, 600213, Nigeria.
  • School of Microelectronics, Tianjin University, Tianjin, 300072, China.
  • Department of Electronics and Telecommunication Engineering, Mbeya University of Science and Technology, Mbeya, Tanzania.
  • School of Electrical and Information Engineering, Tianjin University, Tianjin, 300072, China.
  • Department of Electrical and Electronic Engineering, Taraba State University, Jalingo, 660213, Nigeria.
  • Smart Medical Devices Unit, Jinhua Research Institute of Zhejiang University, Jinhua, China.
  • Department of Biomedical Engineering, School of Precision Instruments and Opto-electronics Engineering, Tianjin University, Tianjin, 300072, China.

Abstract

Magnetic resonance imaging (MRI) is central to noninvasive brain tumor assessment, yet clinical uptake of artificial intelligence depends on both accuracy and transparency. This study presents a lightweight, interpretable hybrid framework that fuses features from two efficient convolutional backbones, MobileNetV2 and EfficientNetV2B0, via late fusion: each backbone's output is global-average-pooled and the resulting vectors are concatenated. Classification is performed with a K-Nearest Neighbors (KNN) head configured with k = 5, Euclidean distance, and distance-based weighting. The dataset contains 7,023 MRI images drawn from Figshare, SARTAJ, and BR35H. Data were split with a 20% held-out test set and a validation set equal to 20% of the remaining training pool, yielding a 64%/16%/20% train/validation/test split. Four diagnostic categories were evaluated: Glioma, Meningioma, Pituitary, and No Tumor. The confusion matrix is strongly concentrated on the diagonal, and class-wise precision, recall, and F1 scores are consistently high on the test set. Five-fold cross-validation with normality assessment and paired significance testing supports robustness across folds. On the held-out test set, class-wise ROC-AUC was 1.00 for all four classes, and overall accuracy was 99.69%. Results should be interpreted in light of the unified dataset; external validation is warranted. Clinical interpretability is supported by class-wise Grad-CAM overlays and SHAP analyses, including waterfall plots that quantify individual feature contributions. These findings indicate that a dual-backbone late-fusion design coupled with a simple nonparametric classifier delivers strong, balanced performance while providing anatomically plausible, case-level insight into model decisions.
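
For readers who want to see the shape of the pipeline, the sketch below illustrates the approach under stated assumptions rather than reproducing the authors' code: TensorFlow/Keras supplies the two ImageNet-pretrained backbones, scikit-learn supplies the KNN head, a 224 x 224 input size is assumed (the paper's preprocessing is not detailed here), and random arrays stand in for the MRI data. The helper `fused_features` is hypothetical; only k = 5, Euclidean distance, distance weighting, and the 64/16/20 split come from the abstract.

```python
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Two ImageNet-pretrained backbones used as frozen feature extractors.
# pooling="avg" applies the global average pooling step, so each model
# emits one 1280-dim vector per image.
mobilenet = tf.keras.applications.MobileNetV2(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")
effnet = tf.keras.applications.EfficientNetV2B0(
    include_top=False, weights="imagenet",
    input_shape=(224, 224, 3), pooling="avg")

def fused_features(images):
    # Late fusion (hypothetical helper): pooled vectors from both
    # backbones are concatenated into one 2560-dim descriptor per image.
    f1 = mobilenet.predict(
        tf.keras.applications.mobilenet_v2.preprocess_input(
            images.copy()), verbose=0)
    f2 = effnet.predict(
        tf.keras.applications.efficientnet_v2.preprocess_input(
            images.copy()), verbose=0)
    return np.concatenate([f1, f2], axis=1)

# Synthetic stand-in for the MRI data (the study uses 7,023 images
# across four classes: glioma, meningioma, pituitary, no tumor).
rng = np.random.default_rng(0)
images = rng.uniform(0.0, 255.0, size=(32, 224, 224, 3)).astype("float32")
labels = np.repeat(np.arange(4), 8)

X = fused_features(images)

# 20% held-out test set, then 20% of the remainder as validation,
# which yields the 64/16/20 train/val/test split from the abstract.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, labels, test_size=0.20, stratify=labels, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(
    X_trainval, y_trainval, test_size=0.20, stratify=y_trainval,
    random_state=42)

# KNN head as described: k = 5, Euclidean distance, distance weighting.
knn = KNeighborsClassifier(
    n_neighbors=5, metric="euclidean", weights="distance")
knn.fit(X_train, y_train)
print("test accuracy:", knn.score(X_test, y_test))
```

A design note implicit in this setup: because both backbones stay frozen and the classifier is nonparametric, "training" reduces to one feature-extraction pass plus storing the training vectors, which is what keeps the framework lightweight relative to end-to-end fine-tuning.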

Topics

Journal Article
