Reproducible meningioma grading across multi-center MRI protocols via hybrid radiomic and deep learning features.
Affiliations (13)
- Faculty of Pharmacy, Middle East University, 11831, Amman, Jordan.
- Ahl al Bayt University, Kerbala, Iraq.
- Marwadi University Research Center, Department of Chemical Engineering, Faculty of Engineering & Technology, Marwadi University, Rajkot, Gujarat, 360003, India.
- Department of Computer Engineering and Application, GLA University, Mathura, 281406, India.
- Department of Chemistry and Biochemistry, School of Sciences, JAIN (Deemed to be University), Bangalore, Karnataka, India.
- Chitkara Centre for Research and Development, Chitkara University, Himachal Pradesh, 174103, Baddi, India.
- Department of Chemistry, Sathyabama Institute of Science and Technology, Chennai, Tamil Nadu, India.
- Department of Public Health and Healthcare Management, Samarkand State Medical University, 18 Amir Temur Street, Samarkand, Uzbekistan.
- College of Nursing, National University of Science and Technology, Dhi Qar, Iraq.
- Pharmacy College, Al-Farahidi University, Baghdad, Iraq.
- Department of Pharmacy, Al-Zahrawi University College, Karbala, Iraq.
- Gilgamesh Ahliya University, Baghdad, Iraq.
- Department of Medical Physics and Radiology, Faculty of Paramedical Sciences, Kashan University of Medical Sciences, Kashan, Islamic Republic of Iran. [email protected].
Abstract
This study aimed to develop a reliable method for preoperative grading of meningiomas by combining handcrafted radiomic features with deep learning features extracted by a 3D autoencoder, leveraging the strengths of both feature types to improve accuracy and reproducibility across different MRI protocols. The study included 3,523 patients with histologically confirmed meningiomas: 1,900 low-grade (Grade I) and 1,623 high-grade (Grades II and III) cases. Radiomic features were extracted from T1 contrast-enhanced and T2-weighted MRI scans using the Standardized Environment for Radiomics Analysis (SERA). Deep learning features were obtained from the bottleneck layer of a 3D autoencoder integrated with attention mechanisms. Feature selection was performed using Principal Component Analysis (PCA) and Analysis of Variance (ANOVA), and classification was performed with machine learning models such as XGBoost, CatBoost, and stacking ensembles. Reproducibility was evaluated with the Intraclass Correlation Coefficient (ICC), and scanner- and protocol-related batch effects were harmonized with the ComBat method. Performance was assessed by accuracy, sensitivity, and the area under the receiver operating characteristic curve (AUC). For T1 contrast-enhanced images, combining radiomic and deep learning features yielded the best performance (AUC 95.85%, accuracy 95.18%), outperforming models using either feature type alone; T2-weighted images performed slightly lower (best AUC 94.12%, accuracy 93.14%). Deep learning features outperformed radiomic features alone, reflecting their ability to capture complex spatial patterns. The end-to-end 3D autoencoder on T1 contrast-enhanced images achieved an AUC of 92.15%, accuracy of 91.14%, and sensitivity of 92.48%, surpassing the T2-weighted imaging models.
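The fusion-and-classification pipeline described above (concatenated handcrafted and bottleneck features, PCA plus ANOVA-based selection, then a stacking ensemble) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data are synthetic stand-ins for SERA radiomics and autoencoder bottleneck features, and scikit-learn's GradientBoosting and RandomForest stand in for XGBoost/CatBoost, which may not be installed.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.ensemble import (GradientBoostingClassifier,
                              RandomForestClassifier, StackingClassifier)
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-ins: 200 "patients" with 100 handcrafted radiomic
# features and 64 autoencoder bottleneck features (in the study these
# come from SERA and the attention 3D autoencoder, respectively).
X_rad, y = make_classification(n_samples=200, n_features=100,
                               n_informative=10, random_state=0)
X_deep = rng.normal(size=(200, 64)) + y[:, None] * 0.5  # weakly label-linked
X = np.hstack([X_rad, X_deep])  # hybrid feature vector per patient

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(
    PCA(n_components=40),          # dimensionality reduction
    SelectKBest(f_classif, k=20),  # ANOVA F-test feature selection
    StackingClassifier(
        estimators=[("gb", GradientBoostingClassifier(random_state=0)),
                    ("rf", RandomForestClassifier(random_state=0))],
        final_estimator=LogisticRegression(),
    ),
)
model.fit(X_tr, y_tr)
print(round(model.score(X_te, y_te), 2))
```

Placing PCA and ANOVA selection inside the pipeline ensures they are fit only on training folds, which matters when reporting cross-validated AUC.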
Reproducibility analysis showed high reliability (ICC > 0.75) for 127 of 215 features, supporting consistent performance across multi-center datasets. The proposed framework effectively integrates radiomic and deep learning features to provide a robust, non-invasive, and reproducible approach to meningioma grading. Future research should validate this framework in real-world clinical settings and explore adding clinical parameters to enhance its prognostic value.
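The ICC screen used for the reproducibility analysis can be illustrated with a small sketch. The function below implements ICC(2,1) (two-way random effects, absolute agreement, single measurement) from the standard ANOVA decomposition; the two synthetic features, one stable and one protocol-sensitive, are hypothetical stand-ins for radiomic features measured on the same patients under several acquisition protocols.

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    Y has shape (n_subjects, k_raters) -- here, one feature's value per
    patient measured under k MRI acquisition protocols."""
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between protocols
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

rng = np.random.default_rng(1)
signal = rng.normal(size=(100, 1))                         # true feature value
stable = signal + rng.normal(scale=0.2, size=(100, 3))     # reproducible
noisy = signal + rng.normal(scale=2.0, size=(100, 3))      # protocol-sensitive
print(round(icc2_1(stable), 2), round(icc2_1(noisy), 2))
```

Features passing the ICC > 0.75 threshold (like `stable` here) would be retained before harmonization and classification; features like `noisy` would be discarded as protocol-dependent.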