Fully automated IVUS image segmentation with efficient deep-learning-assisted annotation.
Affiliations (3)
- Iowa Institute for Biomedical Imaging, The University of Iowa, USA; Department of Electrical and Computer Engineering, The University of Iowa, USA.
- Department of Medicine/Cardiovascular Division, University of Virginia, USA; Beirne B. Carter Center for Immunology Research, University of Virginia, USA.
- Iowa Institute for Biomedical Imaging, The University of Iowa, USA; Department of Electrical and Computer Engineering, The University of Iowa, USA. Electronic address: [email protected].
Abstract
Intravascular ultrasound (IVUS) image segmentation plays a critical role in the diagnosis, treatment planning, and monitoring of coronary artery disease. Although deep learning (DL) methods have achieved state-of-the-art (SOTA) results on many medical image segmentation tasks, delivering clinically acceptable results remains challenging due to the limited availability of large annotated datasets. In this paper, we report an efficient deep learning framework for fully automated IVUS image segmentation that combines active learning with interactive correction of model outputs, dramatically reducing annotation effort in both image selection and annotation querying from human experts. We propose a two-branch network that integrates a spatial and channel-wise probability attention module into the segmentation network to segment lumen and plaque areas while simultaneously predicting potential segmentation errors. By introducing segmentation quality assessment (SQA), we can quantify the quality of the segmentation achieved on unannotated images and provide meaningful visual cues to human experts, helping them concentrate on the most relevant samples, judiciously select the most 'valuable' images for annotation, and use the adjudicated segmentations as the next batch of training annotations. Model performance is thus incrementally boosted by fine-tuning on the newly annotated data. We evaluated our method on coronary IVUS data from 266 subjects (38,771 cross-sectional frames) with 5-fold cross-validation, demonstrating that our approach achieves SOTA segmentation performance using no more than 10% of the training data while significantly reducing annotation effort.
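The SQA-driven selection step described in the abstract can be sketched as follows. This is an illustrative assumption of how per-frame quality scores might drive annotation querying, not the authors' implementation: the scoring convention (higher score = better predicted segmentation quality) and the `select_for_annotation` helper are hypothetical.

```python
import numpy as np

def select_for_annotation(sqa_scores, budget):
    """Rank unannotated frames by predicted segmentation quality and
    return the indices of the `budget` most 'valuable' frames, i.e.
    those the SQA branch predicts the model segments worst.

    sqa_scores : 1-D array of per-frame quality scores (higher = better).
    budget     : number of frames to send to human experts this round.
    """
    order = np.argsort(sqa_scores)       # ascending: worst-quality first
    return order[:budget].tolist()

# Toy example: 10 unannotated frames; query the 3 with the lowest
# predicted quality, then fine-tune on their adjudicated segmentations.
scores = np.array([0.91, 0.42, 0.77, 0.30, 0.88,
                   0.65, 0.95, 0.51, 0.70, 0.60])
picked = select_for_annotation(scores, budget=3)  # → [3, 1, 7]
```

In the full framework this selection would run once per active-learning round, with the model fine-tuned on the newly annotated batch before the next round's scores are computed.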