GlioMODA: Robust Glioma Segmentation in Clinical Routine
Authors
Affiliations (1)
- Department of Neuroradiology, TUM School of Medicine, TUM University Hospital rechts der Isar, Technical University of Munich, Munich, Germany
Abstract
Background: Precise glioma segmentation in MRI is essential for accurate diagnosis, optimal treatment planning, and advancing clinical research. However, most deep learning approaches require complete, standardized MRI protocols that are frequently unavailable in routine clinical practice. This study presents and evaluates GlioMODA, a robust deep learning framework for automated glioma segmentation that delivers consistently high performance across varied and incomplete MRI protocols.
Methods: GlioMODA was trained and validated on the BraTS 2021 dataset (1,251 training cases, 219 testing cases), and performance was systematically assessed across eleven clinically relevant MRI protocol combinations. Segmentation accuracy was evaluated using Dice similarity coefficients (DSC) and panoptic quality metrics. Volumetric accuracy was benchmarked against manual ground truth, and statistical significance was assessed via Wilcoxon signed-rank tests with Benjamini-Yekutieli correction.
Results: GlioMODA demonstrated state-of-the-art segmentation accuracy across tumor subregions and maintained robust performance with incomplete or heterogeneous MRI protocols. Protocols including both T1-weighted contrast-enhanced (T1-CE) and T2-FLAIR sequences yielded volumetric differences versus manual ground truth that were not statistically significant for enhancing tumor (ET: median difference 55 mm³, p = 0.157) and whole tumor (WT: median difference -7 mm³, p = 1.0), with median DSC differences close to zero relative to the four-sequence reference protocol. Omitting either sequence led to substantial and statistically significant volumetric errors.
Conclusions: GlioMODA enables reliable, automated glioma segmentation using a streamlined two-sequence protocol (T1-CE + T2-FLAIR), supporting clinical workflow optimization and broader implementation of quantitative volumetry compatible with RANO 2.0 criteria. GlioMODA is published as an open-source, easy-to-use Python package at https://github.com/BrainLesion/GlioMODA/.
Key Points
- T1-CE + T2-FLAIR maintains enhancing and whole tumor segmentation comparable to four-sequence MRI.
- Consistent volumes with T1-CE + T2-FLAIR support reliable RANO 2.0 assessment.
- Open-source GlioMODA (models + code) supports rapid integration.
Importance of the Study
Automated glioma segmentation is limited in practice by incomplete or heterogeneous MRI protocols. GlioMODA directly addresses this barrier by delivering consistent accuracy across eleven clinically relevant sequence combinations and by identifying a streamlined protocol (T1-CE and T2-FLAIR) whose enhancing- and whole-tumor volumes are not statistically different from the expert reference. This enables shorter scans and reproducible volumetry compatible with RANO 2.0, facilitating reliable response assessment in trials and routine care. By releasing trained models and code as an easy-to-use open-source package, this work enables external validation and integration into neuro-oncology workflows.
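To make the reported volumetry and statistics concrete, the following is a minimal sketch, assuming predicted and manual segmentations stored as NIfTI label maps, of how per-case tumor volumes in mm³ and DSC values can be computed and how paired volume differences can be tested with a Wilcoxon signed-rank test followed by Benjamini-Yekutieli correction (statsmodels' "fdr_by" method). The file names, the enhancing-tumor label value, and helpers such as load_region are illustrative assumptions, not part of the GlioMODA package or the authors' evaluation code.

```python
"""Illustrative evaluation sketch; names and label encodings are hypothetical."""
import numpy as np
import nibabel as nib
from scipy.stats import wilcoxon
from statsmodels.stats.multitest import multipletests


def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, ref).sum() / denom


def volume_mm3(mask: np.ndarray, zooms) -> float:
    """Tumor volume in mm^3: voxel count times voxel volume."""
    return float(mask.sum()) * float(np.prod(zooms))


def load_region(path: str, labels) -> tuple:
    """Load a NIfTI label map, binarize it to the given labels, return voxel sizes."""
    img = nib.load(path)
    return np.isin(img.get_fdata(), labels), img.header.get_zooms()[:3]


# Hypothetical paired cases: (predicted segmentation, manual ground truth).
cases = [
    ("pred_case01.nii.gz", "gt_case01.nii.gz"),
    ("pred_case02.nii.gz", "gt_case02.nii.gz"),
]
ET_LABELS = (3,)  # assumed label value for enhancing tumor

vol_diffs, dscs = [], []
for pred_path, gt_path in cases:
    pred, zooms = load_region(pred_path, ET_LABELS)
    ref, _ = load_region(gt_path, ET_LABELS)
    vol_diffs.append(volume_mm3(pred, zooms) - volume_mm3(ref, zooms))
    dscs.append(dice(pred, ref))

# One-sample Wilcoxon signed-rank test of the paired volume differences against
# zero; in the full evaluation, one p-value per region/protocol comparison would
# be collected before applying Benjamini-Yekutieli correction jointly.
_, p_et = wilcoxon(vol_diffs)
reject, p_adj, _, _ = multipletests([p_et], alpha=0.05, method="fdr_by")

print(f"median DSC: {np.median(dscs):.3f}")
print(f"median volume difference: {np.median(vol_diffs):.0f} mm^3")
print(f"BY-adjusted p-value: {p_adj[0]:.3f}")
```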