A Novel Dual-Modality Dual-View Hybrid Deep Learning-Machine Learning Framework for the Prediction of Carotid Plaque Vulnerability via Late Fusion.
Authors
Affiliations (3)
- Department of Biomedical Engineering, The Hong Kong Polytechnic University, Hong Kong SAR, China.
- Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, Beijing 100069, China.
- Research Institute for Smart Ageing, The Hong Kong Polytechnic University, Hong Kong SAR, China.
Abstract
<b>Background</b>: Ultrasound imaging is an ideal tool for routine carotid plaque screening to identify individuals at high risk of stroke for clinical intervention. However, no existing study has leveraged multi-modal, multi-view ultrasound imaging for AI-enabled automatic classification of carotid plaque vulnerability. This study aims to develop and validate an effective AI model for carotid plaque vulnerability classification under a dual-modal (B-Mode and contrast-enhanced) dual-view (longitudinal and cross-sectional) setting to maximize the utility and potential of ultrasound imaging. <b>Methods</b>: Hybrid deep-learning (DL) and machine-learning (ML) methods were employed to balance model discriminability and interpretability. B-Mode ultrasound (BMUS) and contrast-enhanced ultrasound (CEUS) images from 241 patients were retrospectively analyzed using the proposed hybrid DL-ML variants. <b>Results</b>: Our findings suggest that the hybrid VGG-RF model developed under the dual-modal dual-view setting outperforms those developed under other settings for identifying vulnerable carotid plaques. The VGG-RF model emerged as the best-performing model, achieving an AUC of 0.908, precision of 0.765, recall of 0.929, specificity of 0.886, and F1 score of 0.839. The inherent interpretability of the VGG-RF model revealed that the long-axis views of the BMUS and CEUS images were the major contributing features for discriminating vulnerable carotid plaques from their stable counterparts. <b>Conclusions</b>: The present study underscores the effectiveness of AI models developed under dual-modal dual-view ultrasound settings. Notably, the hybrid VGG-RF model was benchmarked as the best-performing among the studied hybrid DL-ML variants. Further studies on a larger cohort in a prospective setting are warranted to validate these findings.
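The hybrid VGG-RF late-fusion design described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' released code: the VGG feature-extraction step is simulated here with random vectors (one per modality-view combination: BMUS/CEUS × longitudinal/cross-sectional), the feature dimension and Random Forest hyperparameters are assumptions, and the labels are synthetic, so the resulting metrics carry no clinical meaning.

```python
# Hypothetical sketch of a hybrid VGG-RF late-fusion pipeline.
# In practice, each per-view feature vector would come from a pretrained
# VGG backbone applied to the corresponding ultrasound image; here the
# extracted embeddings are simulated with random data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients, feat_dim = 241, 128  # 241 patients as in the study; dim is illustrative

# One simulated deep-feature matrix per modality-view combination
view_names = ["bmus_long", "bmus_cross", "ceus_long", "ceus_cross"]
views = {name: rng.normal(size=(n_patients, feat_dim)) for name in view_names}
y = rng.integers(0, 2, size=n_patients)  # synthetic vulnerability labels

# Late fusion: concatenate the per-view feature vectors per patient
X = np.concatenate([views[name] for name in view_names], axis=1)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

# Interpretability: aggregate RF feature importances per view to attribute
# each modality-view's contribution (the study reports long-axis views as
# the dominant contributors)
importances = clf.feature_importances_.reshape(len(view_names), feat_dim).sum(axis=1)
per_view_contribution = dict(zip(view_names, importances))
```

The Random Forest stage is what gives the hybrid model its interpretability: unlike an end-to-end deep classifier, its feature importances can be summed over each view's feature slice to rank the contribution of each modality-view input.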