SAHDAI-XAI (Subarachnoid Hemorrhage Detection Artificial Intelligence, eXplainable AI): Testing Explainability in SAH Imaging Data and AI Modeling
Authors
Affiliations (1)
- Departments of Neurological Surgery, Neurology and Critical Care, 4500 San Pablo Rd, Jacksonville, FL 32224
Abstract
Introduction
Subarachnoid hemorrhage (SAH) is a life-threatening neurological emergency. SAHDAI-XAI (Subarachnoid Hemorrhage Detection Artificial Intelligence, eXplainable AI) is a cloud-based machine learning model built as a binary (positive/negative) classifier to detect SAH bleeding in any of eight potential hemorrhage spaces. It aims to address the lack of transparency in AI-based detection of subarachnoid hemorrhage.
Methods
This project is divided into two phases, integrating AutoML and BLAST: a low-code statistical assessment of hemorrhage detection accuracy combined with color-based visualization of bleeding areas to enhance transparency. In phase 1, an AutoML model was trained on Google Cloud Vertex AI after preprocessing. The model completed four runs with progressively larger datasets. Each dataset was split into 80% for training, 10% for validation, and 10% for testing, with explainability (XRAI) applied to the test images. We started with 20 non-contrast head CT images, followed by 40, 200, and then 300 images; in each AutoML run, the dataset was evenly divided, with one half manually labeled as positive for hemorrhage and the other half labeled as negative controls. The fourth AutoML run evaluated the model's ability to differentiate a hemorrhage from other pathologies, such as tumors and calcifications. In phase 2, the goal is to increase explainability by visualizing predictive image features and showing detected hemorrhage locations using the Brain Lesion Analysis and Segmentation Tool for Computed Tomography (BLAST), which segments and quantifies four different hemorrhage and edema locations.
Results
In phase 1, the first two AutoML runs demonstrated 100% average precision because of the small dataset size. In the third run, after the dataset was enlarged, the average precision was 97.9%, with one false-negative (FN) image detected.
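The balanced 80/10/10 partition described in Methods can be sketched in plain Python. This is a minimal sketch: `stratified_split` is a hypothetical helper written for illustration, and the study itself relied on Vertex AI's managed data split.

```python
import random

def stratified_split(image_ids, labels, seed=42):
    """Split a labeled dataset into 80% train / 10% validation / 10% test,
    preserving the positive:negative ratio within each partition.
    Hypothetical helper; the study used Vertex AI's managed split."""
    rng = random.Random(seed)
    by_class = {}
    for img, lab in zip(image_ids, labels):
        by_class.setdefault(lab, []).append(img)
    split = {"train": [], "validation": [], "test": []}
    for lab, items in by_class.items():
        rng.shuffle(items)
        n_train = int(len(items) * 0.8)
        n_val = int(len(items) * 0.1)
        split["train"] += [(i, lab) for i in items[:n_train]]
        split["validation"] += [(i, lab) for i in items[n_train:n_train + n_val]]
        split["test"] += [(i, lab) for i in items[n_train + n_val:]]
    return split

# Run 3 of the study: 300 CTs, half labeled positive, half negative controls.
ids = [f"ct_{k:03d}" for k in range(300)]
labels = ["positive"] * 150 + ["negative"] * 150
parts = stratified_split(ids, labels)
# 240 train / 30 validation / 30 test, each half positive, half negative.
```

Stratifying per class keeps the 50/50 positive:negative balance inside every partition, so the test-set metrics (average precision, FN/FP counts) are not skewed by an accidental class imbalance.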
In the fourth run, which evaluated the model's ability to differentiate hemorrhage from other pathologies, the average precision dropped to 94.4%, with two false-positive (FP) images in the test set. In phase 2, after extensive preprocessing using the BLAST model's public Python code, topographic images of the bleeding were produced with mixed results: some accurately covered a substantial percentage of the bleeding, whereas others did not.
Conclusion
SAHDAI-XAI is a new image-based explainable AI model for SAH that enhances the transparency of AI hemorrhage detection in daily clinical practice; it aims to overcome AI's opaque nature and accelerate time to diagnosis, thereby helping decrease mortality rates [6]. Using the BLAST model facilitates a better understanding of AI outcomes and supports visually demonstrated XAI for SAH detection and prediction of hemorrhage coverage. The goal is to resolve AI's black-box aspect, making machine learning model outcomes increasingly transparent and explainable.
Keywords: SAH, explainable AI, GCP, AutoML, BLAST, black-box.
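The color-based visualization idea behind phase 2 can be illustrated with a simple mask overlay: blending a binary hemorrhage mask onto a grayscale CT slice as a red tint. This is a minimal NumPy sketch under stated assumptions; `overlay_hemorrhage` is a hypothetical helper, not the published BLAST implementation.

```python
import numpy as np

def overlay_hemorrhage(ct_slice, mask, alpha=0.5):
    """Blend a binary hemorrhage mask onto a grayscale CT slice as a red
    overlay, mimicking a color-based bleed visualization.
    Hypothetical helper, not the BLAST model's actual code."""
    ct = ct_slice.astype(np.float32)
    ct = (ct - ct.min()) / (np.ptp(ct) + 1e-8)  # normalize intensities to [0, 1]
    rgb = np.stack([ct, ct, ct], axis=-1)       # grayscale -> RGB
    red = np.zeros_like(rgb)
    red[..., 0] = 1.0                           # pure red
    m = mask.astype(bool)
    rgb[m] = (1 - alpha) * rgb[m] + alpha * red[m]  # alpha-blend masked voxels
    return rgb

# Toy 4x4 "slice" with a single voxel flagged as hemorrhage.
slice_ = np.arange(16, dtype=np.float32).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = True
out = overlay_hemorrhage(slice_, mask)
```

Overlaying the mask on the source slice, rather than showing the segmentation alone, lets a reader judge at a glance what fraction of the visible bleed the model's prediction actually covers, which is the comparison the phase 2 results describe.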