Interactive prototype learning and self-learning for few-shot medical image segmentation.

Authors

Song Y, Xu C, Wang B, Du X, Chen J, Zhang Y, Li S

Affiliations (4)

  • School of Computer Science and Technology, Anhui University, 230601, Hefei, China; Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Anhui University, 230601, Hefei, China.
  • School of Computer Science and Technology, Anhui University, 230601, Hefei, China; Key Laboratory of Intelligent Computing and Signal Processing of Ministry of Education, Anhui University, 230601, Hefei, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, 230051, Hefei, China. Electronic address: [email protected].
  • Department of Computer Science and Technology, Tsinghua University, 100084, Beijing, China. Electronic address: [email protected].
  • School of Engineering, Case Western Reserve University, 44106, Cleveland, United States.

Abstract

Few-shot learning alleviates the heavy dependence of medical image segmentation on large-scale labeled data, but its performance on new tasks still lags well behind that of traditional deep learning. Existing methods mainly learn class knowledge from a few labeled (support) samples and extend it to unlabeled (query) samples. However, large distribution differences between the support and query images cause serious deviations in the transfer of class knowledge, which manifest as two segmentation challenges: intra-class inconsistency, and inter-class similarity with blurred and confused boundaries. In this paper, we propose a new interactive prototype learning and self-learning network to address these challenges. First, we propose a deep encoding-decoding module that learns high-level features of the support and query images to build peak prototypes carrying the greatest semantic information, providing semantic guidance for segmentation. Then, we propose an interactive prototype learning module that improves intra-class feature consistency and reduces inter-class feature similarity by performing mean-prototype interaction on mid-level features and peak-prototype interaction on high-level features. Last, we propose a query-features-guided self-learning module that separates foreground from background at the feature level and combines low-level feature maps to complement boundary information. Our model achieves competitive segmentation performance on benchmark datasets and shows substantial improvement in generalization ability.
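The mean and peak prototypes mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the function names, the masked-average-pooling formulation of the mean prototype, and the score-based selection used here for the peak prototype are assumptions chosen to reflect common practice in prototype-based few-shot segmentation.

```python
import numpy as np

def mean_prototype(features, mask):
    """Mean (class) prototype via masked average pooling.

    features: (C, H, W) feature map from the support image.
    mask:     (H, W) binary foreground mask for the support image.
    Returns a C-dimensional prototype vector averaging foreground features.
    """
    fg = mask.astype(features.dtype)
    denom = fg.sum() + 1e-8  # avoid division by zero on empty masks
    return (features * fg[None]).sum(axis=(1, 2)) / denom

def peak_prototype(features, mask, scores):
    """Peak prototype: the feature vector at the highest-scoring
    foreground location (a hypothetical stand-in for the paper's
    'greatest semantic information' selection).

    scores: (H, W) per-pixel semantic score map (an assumed input).
    """
    masked_scores = np.where(mask > 0, scores, -np.inf)
    i, j = np.unravel_index(np.argmax(masked_scores), scores.shape)
    return features[:, i, j]
```

Under this sketch, query segmentation would then compare each query-pixel feature to these prototypes (e.g. by cosine similarity) to produce a foreground probability map; the paper's interaction modules refine the prototypes before that comparison.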

Topics

Journal Article
