Deep Learning Application of YOLOv8 for Aortic Dissection Screening using Non-contrast Computed Tomography.
Authors
Affiliations (7)
- Department of Interventional and Vascular Surgery, The Third Affiliated Hospital of Nanjing Medical University (Changzhou Second People's Hospital), Changzhou, China.
- Department of Interventional Radiology, First People's Hospital of Changzhou (Affiliated Hospital of Soochow University), Changzhou, China.
- Department of Oncology and Vascular Intervention, Gaochun People's Hospital, Nanjing, China.
- Department of Interventional Radiology, Wujin Hospital, Jiangsu University, Changzhou, China.
- Department of Radiology, The Third Affiliated Hospital of Nanjing Medical University (Changzhou Second People's Hospital), Changzhou, China. Electronic address: [email protected].
- Department of Interventional Radiology, Huaian Hospital of Huai'an City (Huaian Cancer Hospital), Huai'an, China. Electronic address: [email protected].
- Department of Interventional and Vascular Surgery, The Third Affiliated Hospital of Nanjing Medical University (Changzhou Second People's Hospital), Changzhou, China. Electronic address: [email protected].
Abstract
Acute aortic dissection (AD) is a life-threatening condition that poses considerable challenges for timely diagnosis. Non-contrast computed tomography (CT) is frequently used to diagnose AD in certain clinical settings, but its diagnostic accuracy varies among radiologists. This study aimed to develop and validate an interpretable YOLOv8 deep learning model based on non-contrast CT to detect AD.

This retrospective study included patients from five institutions, divided into training, internal validation, and external validation cohorts. The YOLOv8 deep learning model was trained on annotated non-contrast CT images. Its performance was evaluated using area under the curve (AUC), sensitivity, specificity, and inference time, and compared with the findings of vascular interventional radiologists, general radiologists, and radiology residents. In addition, gradient-weighted class activation mapping (Grad-CAM) saliency map analysis was performed.

A total of 1 138 CT scans were assessed (569 with AD, 569 controls). The YOLOv8s model achieved an AUC of 0.964 (95% confidence interval [CI] 0.939 - 0.988) in the internal validation cohort and 0.970 (95% CI 0.946 - 0.990) in the external validation cohort. In the external validation cohort, all three groups of radiologists detected AD less accurately than the YOLOv8s model. The model's sensitivity (0.976) was slightly higher than that of vascular interventional specialists (0.965; p = .18), and its specificity (0.935) was superior to that of general radiologists (0.835; p < .001). The model's inference time was 3.47 seconds, significantly shorter than the radiologists' mean interpretation time of 25.32 seconds (p < .001). Grad-CAM analysis confirmed that the model focused on anatomically and clinically relevant regions, supporting its interpretability.
The YOLOv8s deep learning model reliably detected AD on non-contrast CT and outperformed radiologists, particularly in time efficiency and diagnostic accuracy. Its implementation could enhance AD screening in specific settings, support clinical decision making, and improve diagnostic quality.
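The sensitivity and specificity reported above follow the standard confusion-matrix definitions. A minimal sketch of those calculations is shown below; the counts used are purely hypothetical for illustration and are not the study's data.

```python
def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity (true positive rate): TP / (TP + FN)."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Specificity (true negative rate): TN / (TN + FP)."""
    return tn / (tn + fp)

# Hypothetical counts for illustration only (not from the study):
# of 285 AD scans, 278 flagged; of 285 controls, 266 correctly cleared.
tp, fn = 278, 7
tn, fp = 266, 19

print(round(sensitivity(tp, fn), 3))  # 0.975
print(round(specificity(tn, fp), 3))  # 0.933
```

A screening model such as the one described would typically be tuned to favour sensitivity over specificity, since a missed dissection is far more costly than a false alarm triggering a confirmatory contrast-enhanced CT.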