Radiomics-Based AI Model to Assist Clinicians in Intracranial Hemorrhage Diagnosis: External Validation Study.
Affiliations (8)
- Department of Radiology, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand.
- Global Health and Chronic Conditions Research Group, Chiang Mai University, Chiang Mai, Thailand.
- Master's Degree Program in Data Science, Faculty of Engineering, Chiang Mai University, Chiang Mai, Thailand.
- Department of Radiology, Ramathibodi Hospital, Mahidol University, Bangkok, Thailand.
- Department of Family Medicine, Faculty of Medicine, Chiang Mai University, Chiang Mai, Thailand.
- Department of Statistics, Faculty of Science, Chiang Mai University, Chiang Mai, Thailand.
- Data Science Research Center, Faculty of Science, Chiang Mai University, 239 Huay Kaew Road, Muang District, Chiang Mai, 50200, Thailand, 66 903327931.
- Department of Computer Science, Faculty of Science, Chiang Mai University, Chiang Mai, Thailand.
Abstract
Early identification of the etiology of spontaneous intracerebral hemorrhage (ICH) contributes significantly to planning a suitable treatment strategy. A radiomics-based artificial intelligence (AI) model for classifying the causes of spontaneous ICH from brain computed tomography scans has previously been proposed. This study aimed to externally validate this AI model and assess its clinical utility. We used 69 computed tomography scans from a separate cohort to evaluate the model's performance in classifying nontraumatic ICHs into primary, tumorous, and vascular malformation-related causes. We also assessed the accuracy, sensitivity, specificity, and positive predictive value of clinicians, radiologists, and trainees in identifying the ICH causes before and after using the model's assistance. Performance was analyzed statistically by reader specialty and level of expertise. The AI model achieved an overall accuracy of 0.65 in classifying the 3 causes of ICH. The model's assistance improved overall diagnostic performance, narrowing the gap between nonradiology and radiology groups, as well as between trainees and experts. Accuracy increased from 0.68 to 0.72, from 0.72 to 0.76, from 0.69 to 0.74, and from 0.72 to 0.75 for nonradiologists, radiologists, trainees, and specialists, respectively. With the model's support, radiology professionals demonstrated the highest accuracy, highlighting the model's potential to enhance diagnostic consistency across different levels of expertise. When applied to this external dataset, the radiomics-based AI model's accuracy in categorizing spontaneous ICHs decreased. However, using the model as an assistant substantially improved the performance of all reader groups, including trainees as well as radiology and nonradiology specialists.
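The abstract reports overall accuracy together with per-cause sensitivity, specificity, and positive predictive value for a 3-class reading task. The sketch below is not taken from the study; it only illustrates, with a hypothetical 3x3 confusion matrix, how these reader metrics can be derived one-vs-rest for the three ICH causes (primary, tumorous, vascular malformation-related).

```python
# Minimal sketch (hypothetical counts, not study data): one-vs-rest metrics
# for a 3-class ICH cause classification task.
import numpy as np

CLASSES = ["primary", "tumorous", "vascular_malformation"]

# Rows = true cause, columns = predicted cause (hypothetical counts).
cm = np.array([
    [30, 4, 3],
    [5, 12, 2],
    [4, 2, 7],
])

# Overall accuracy = correctly classified scans / all scans.
overall_accuracy = np.trace(cm) / cm.sum()
print(f"overall accuracy: {overall_accuracy:.2f}")

for i, name in enumerate(CLASSES):
    tp = cm[i, i]                      # true positives for this cause
    fn = cm[i, :].sum() - tp           # missed cases of this cause
    fp = cm[:, i].sum() - tp           # other causes labeled as this cause
    tn = cm.sum() - tp - fn - fp       # everything else
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    print(f"{name}: sensitivity={sensitivity:.2f} "
          f"specificity={specificity:.2f} ppv={ppv:.2f}")
```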