Comparative analysis of transformer-based deep learning models for glioma and meningioma classification.

Authors

Nalentzi K, Gerogiannis K, Bougias H, Stogiannos N, Papavasileiou P

Affiliations (5)

  • Biomedical Sciences Department, Radiology-Radiotherapy Sector, University of West Attica, Athens, Greece. Electronic address: [email protected].
  • Electrical Computer Engineering Department, Aristotle University of Thessaloniki, Thessaloniki, Greece.
  • Molecular Imaging Department, Ioannina University Hospital, Ioannina, Greece.
  • Honorary Research Fellow, Department of Midwifery & Radiography, City St George's, University of London, Northampton Square, London, UK; Magnitiki Tomografia Kerkyras, Corfu, Greece.
  • Biomedical Sciences Department, Radiology-Radiotherapy Sector, University of West Attica, Athens, Greece.

Abstract

This study compares the classification accuracy of novel transformer-based deep learning models (ViT and BEiT) on brain MRIs of gliomas and meningiomas through a feature-driven approach. Meta's Segment Anything Model was used for semi-automatic segmentation, yielding a fully neural network-based workflow for this classification task. The ViT and BEiT models were fine-tuned on a publicly available brain MRI dataset. Glioma/meningioma cases (625/507) were used for training and 520 cases (260 gliomas/260 meningiomas) for testing. The deep radiomic features extracted from ViT and BEiT underwent normalization, dimensionality reduction based on the Pearson correlation coefficient (PCC), and feature selection using analysis of variance (ANOVA). A multi-layer perceptron (MLP) with 1 hidden layer, 100 units, rectified linear unit activation, and the Adam optimizer was used as the classifier. Hyperparameter tuning was performed via 5-fold cross-validation. The ViT model achieved the highest AUC on the validation dataset using 7 features, yielding an AUC of 0.985 and an accuracy of 0.952. On the independent testing dataset, the model exhibited an AUC of 0.962 and an accuracy of 0.904. The BEiT model yielded an AUC of 0.939 and an accuracy of 0.871 on the testing dataset. This study demonstrates the effectiveness of transformer-based models, especially ViT, for glioma and meningioma classification, achieving high AUC scores and accuracy. However, the study is limited by the use of a single dataset, which may affect generalizability. Future work should focus on expanding datasets and further optimizing models to improve performance and applicability across different institutions. This study introduces a feature-driven methodology for glioma and meningioma classification, showcasing the accuracy and robustness of transformer-based models.
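The abstract outlines a feature-driven pipeline: deep features from a fine-tuned ViT/BEiT backbone are normalized, reduced via PCC-based filtering, selected with ANOVA, and fed to an MLP tuned with 5-fold cross-validation. The following is a minimal sketch of that downstream stage using scikit-learn; the feature matrix `X`, labels `y`, the correlation threshold (0.95), and the hyperparameter grid are placeholders and assumptions, not details taken from the paper.

```python
# Sketch of the feature-selection and classification stage described in the abstract.
# X (deep ViT/BEiT features) and y (glioma vs. meningioma labels) are assumed inputs.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler


def drop_correlated(X: np.ndarray, threshold: float = 0.95) -> np.ndarray:
    """Drop features whose absolute Pearson correlation with an earlier feature exceeds the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    upper = np.triu(corr, k=1)
    keep = [i for i in range(X.shape[1]) if not np.any(upper[:, i] > threshold)]
    return X[:, keep]


# Placeholder data standing in for the extracted deep radiomic features.
X = np.random.rand(200, 768)
y = np.random.randint(0, 2, size=200)

# PCC-based dimensionality reduction (threshold is an illustrative choice).
X_reduced = drop_correlated(X, threshold=0.95)

pipeline = Pipeline([
    ("scale", StandardScaler()),                        # feature normalization
    ("anova", SelectKBest(score_func=f_classif, k=7)),  # ANOVA feature selection
    ("mlp", MLPClassifier(hidden_layer_sizes=(100,),    # 1 hidden layer, 100 units
                          activation="relu",
                          solver="adam",
                          max_iter=1000,
                          random_state=0)),
])

# Hyperparameter tuning via 5-fold cross-validation (grid values are illustrative).
param_grid = {"anova__k": [5, 7, 10], "mlp__alpha": [1e-4, 1e-3]}
search = GridSearchCV(pipeline, param_grid,
                      scoring="roc_auc",
                      cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0))
search.fit(X_reduced, y)
print(search.best_params_, search.best_score_)
```

Wrapping normalization, ANOVA selection, and the MLP in a single pipeline keeps all fitting inside each cross-validation fold, which avoids leaking information from validation folds into feature selection.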

Topics

Journal Article
