Auto-segmentation of cerebral cavernous malformations using a convolutional neural network.

Authors

Chou CJ, Yang HC, Lee CC, Jiang ZH, Chen CJ, Wu HM, Lin CF, Lai IC, Peng SJ

Affiliations (9)

  • Division of Neurosurgery, Department of Surgery, Kaohsiung Veterans General Hospital, Kaohsiung, Taiwan.
  • School of Medicine, College of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan.
  • Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei, Taiwan.
  • Department of Electrical Engineering, National Central University, Taoyuan, Taiwan.
  • University of Texas Health Science Center at Houston, Houston, TX, USA.
  • Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan.
  • Department of Heavy Particles & Radiation Oncology, Taipei Veterans General Hospital, Taipei, Taiwan.
  • In-Service Master Program in Artificial Intelligence in Medicine, College of Medicine, Taipei Medical University, No.250, Wuxing St., Xinyi Dist, Taipei, 110, Taiwan. [email protected].
  • Clinical Big Data Research Center, Taipei Medical University Hospital, Taipei Medical University, Taipei, Taiwan. [email protected].

Abstract

This paper presents a deep learning model for the automated segmentation of cerebral cavernous malformations (CCMs). The model was trained using treatment planning data from 199 Gamma Knife (GK) exams, comprising 171 cases with a single CCM and 28 cases with multiple CCMs. The training data included initial MRI images with target CCM regions manually annotated by neurosurgeons. Brain parenchyma was first extracted using a mask region-based convolutional neural network (Mask R-CNN), and the extracted data were then processed using a 3D convolutional neural network known as DeepMedic. The efficacy of the brain parenchyma extraction model was demonstrated via five-fold cross-validation, resulting in an average Dice similarity coefficient of 0.956 ± 0.002. The CCM segmentation models achieved an average Dice similarity coefficient of 0.741 ± 0.028 based solely on T2-weighted (T2W) images. The Dice similarity coefficients for CCM segmentation by Zabramski classification were as follows: type I (0.743), type II (0.742), and type III (0.740). We also developed a user-friendly graphical user interface to facilitate the use of these models in clinical analysis. In summary, the proposed deep learning model provides automated segmentation of CCMs with sufficient performance across the Zabramski classification types.
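The abstract reports all segmentation accuracy as the Dice similarity coefficient. Below is a minimal sketch, assuming NumPy and binary voxel masks, of how that overlap metric is typically computed; the function name dice_coefficient and the toy arrays are illustrative and are not taken from the paper or its code.

    import numpy as np

    def dice_coefficient(pred, truth):
        """Dice similarity coefficient between two binary segmentation masks."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        denom = pred.sum() + truth.sum()
        if denom == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        intersection = np.logical_and(pred, truth).sum()
        return 2.0 * intersection / denom

    # Toy example: 2 overlapping voxels, mask sizes 3 and 2 -> 2*2 / (3+2) = 0.8
    pred = np.array([[0, 1, 1], [0, 1, 0]])
    truth = np.array([[0, 1, 0], [0, 1, 0]])
    print(dice_coefficient(pred, truth))  # 0.8

A value of 1.0 indicates identical masks, so the reported coefficients of 0.956 (brain parenchyma) and 0.741 (CCMs) can be read on this scale.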

Topics

  • Hemangioma, Cavernous, Central Nervous System
  • Magnetic Resonance Imaging
  • Neural Networks, Computer
  • Deep Learning
  • Journal Article