SeruNet-MS: A Two-Stage Interpretable Framework for Multiple Sclerosis Risk Prediction with SHAP-Based Explainability.

Authors

Aksoy S, Demircioglu P, Bogrekci I

Affiliations (2)

  • Institute of Computer Science, Ludwig Maximilian University of Munich (LMU), Oettingenstrasse 67, 80538 Munich, Germany.
  • Department of Mechanical Engineering, Aydin Adnan Menderes University (ADU), Aytepe, Aydin 09010, Türkiye.

Abstract

Background/Objectives: Multiple sclerosis (MS) is a chronic demyelinating disease where early identification of patients at risk of conversion from clinically isolated syndrome (CIS) to clinically definite MS remains a critical unmet clinical need. Existing machine learning approaches often lack interpretability, limiting clinical trust and adoption. The objective of this research was to develop a novel two-stage machine learning framework with comprehensive explainability to predict CIS-to-MS conversion while addressing demographic bias and interpretability limitations.

Methods: A cohort of 177 CIS patients from the National Institute of Neurology and Neurosurgery in Mexico City was analyzed using SeruNet-MS, a two-stage framework that separates demographic baseline risk from clinical risk modification. Stage 1 applied logistic regression to demographic features, while Stage 2 incorporated 25 clinical and symptom features, including MRI lesions, cerebrospinal fluid biomarkers, electrophysiological tests, and symptom characteristics. Patient-level interpretability was achieved through SHAP (SHapley Additive exPlanations) analysis, providing transparent attribution of each factor's contribution to risk assessment.

Results: The two-stage model achieved a ROC-AUC of 0.909, accuracy of 0.806, precision of 0.842, and recall of 0.800, outperforming baseline machine learning methods. Cross-validation confirmed stable performance (0.838 ± 0.095 AUC) with appropriate generalization. SHAP analysis identified periventricular lesions, oligoclonal bands, and symptom complexity as the strongest predictors, with clinical examples illustrating transparent patient-specific risk communication.

Conclusions: The two-stage approach effectively mitigates demographic bias by separating non-modifiable factors from actionable clinical findings. SHAP explanations provide clinicians with clear, individualized insights into prediction drivers, enhancing trust and supporting decision making. This framework demonstrates that high predictive performance can be achieved without sacrificing interpretability, representing a significant step forward for explainable AI in MS risk stratification and real-world clinical adoption.
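For readers who want a concrete picture of the two-stage design, the Python sketch below follows the Methods description: a logistic regression fitted on demographic features alone produces a baseline risk, which is then passed alongside the clinical features to a second-stage classifier explained with SHAP. This is a minimal illustration under stated assumptions, not the authors' implementation: the synthetic data, the feature names, the gradient-boosting choice for Stage 2, and the stacking of the Stage 1 probability as an extra input column are all assumptions filled in for demonstration.

import numpy as np
import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 177  # cohort size reported in the abstract

# Synthetic stand-ins for the real cohort (placeholder columns, hypothetical).
demo = pd.DataFrame({
    "age": rng.integers(18, 55, n),
    "sex": rng.integers(0, 2, n),
})
clinical = pd.DataFrame(
    rng.normal(size=(n, 25)),
    columns=[f"clinical_{i}" for i in range(25)],  # e.g. MRI lesions, OCB, evoked potentials
)
y = rng.integers(0, 2, n)  # 1 = converted from CIS to clinically definite MS

idx_tr, idx_te = train_test_split(
    np.arange(n), test_size=0.2, random_state=0, stratify=y
)

# Stage 1: demographic baseline risk via logistic regression (as in the abstract).
stage1 = LogisticRegression(max_iter=1000).fit(demo.iloc[idx_tr], y[idx_tr])
baseline_risk = stage1.predict_proba(demo)[:, 1]

# Stage 2: clinical risk modification on top of the demographic baseline.
# Gradient boosting is an assumed choice; the paper does not specify it here.
X2 = clinical.assign(baseline_risk=baseline_risk)
stage2 = GradientBoostingClassifier(random_state=0).fit(X2.iloc[idx_tr], y[idx_tr])

# Patient-level interpretability via SHAP (TreeExplainer suits tree ensembles).
explainer = shap.TreeExplainer(stage2)
shap_values = explainer.shap_values(X2.iloc[idx_te])
print("Per-feature contributions for the first held-out patient:")
print(dict(zip(X2.columns, np.round(shap_values[0], 3))))

Feeding Stage 1's probability into Stage 2 as a single column is one common way to keep non-modifiable demographic effects isolated from the per-feature SHAP attributions of the clinical inputs; it mirrors the paper's separation of baseline risk from risk modification without claiming to reproduce its exact architecture.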

Topics

Journal Article
