CDP-KDNet: Curriculum-Guided Dynamic Pruning and Knowledge Distillation for Resource-Efficient Ultrasound Elastography
Authors
Affiliations (2)
- School of Computer Science and Software Engineering, Southwest Petroleum University, Chengdu, China.
- Department of Biomedical Engineering, Michigan Technological University, Houghton, MI, USA.
Abstract
In recent years, convolutional neural network (CNN)-based optical flow models for motion estimation have been applied to radio-frequency (RF) ultrasound and B-mode (BM) data with excellent performance. However, these models rely on intricate architectures with large parameter counts, which hinders deployment on resource-constrained devices. This paper proposes a novel model-compression approach that integrates dynamic pruning, knowledge distillation, and curriculum learning. The proposed method substantially reduces model complexity (i.e., memory demand and computational cost) while minimizing performance degradation. The teacher network is based on the Unsupervised Motion Estimation CNN (UMEN-Net). We then derived a parameter-reduced sub-network, DP-Net, and applied the proposed training scheme to obtain the final model, CDP-KDNet. CDP-KDNet was evaluated on simulated, phantom, and in vivo ultrasound data. Compared to DP-Net and other lightweight CNNs, it achieves superior Signal-to-Noise Ratio (SNR) and Contrast-to-Noise Ratio (CNR) for axial strain estimation across all tested datasets. Its performance closely matches that of the teacher network while using only 45.3% of the parameters and 67.8% of the floating-point operations. Moreover, as an unsupervised model, CDP-KDNet requires no ground-truth labels during training, making it a promising approach for ultrasound motion estimation.
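To make the compression recipe concrete, the sketch below shows one common way to combine a teacher-matching distillation term with an unsupervised data term under a curriculum weight. This is an illustrative assumption, not the paper's actual objective: the function names (`distill_loss`, `curriculum_alpha`), the L2 distillation term on the flow field, and the linear curriculum schedule are all hypothetical placeholders for whatever losses and schedule CDP-KDNet actually uses.

```python
import numpy as np

def distill_loss(student_flow, teacher_flow, data_loss, alpha):
    """Hypothetical combined objective for the pruned student:
    alpha weights the unsupervised data term; (1 - alpha) weights an
    L2 term pulling the student's flow field toward the teacher's."""
    kd = float(np.mean((np.asarray(student_flow) - np.asarray(teacher_flow)) ** 2))
    return alpha * data_loss + (1.0 - alpha) * kd

def curriculum_alpha(epoch, total_epochs):
    """Hypothetical linear curriculum: lean on the teacher early in
    training, then shift weight toward the unsupervised data term."""
    return min(1.0, epoch / float(total_epochs))

# Early on, the teacher term dominates; a student matching the teacher
# exactly incurs only the (down-weighted) data loss.
flow = np.ones((4, 4))
print(distill_loss(flow, flow, data_loss=1.0, alpha=curriculum_alpha(2, 10)))
```

In this formulation the student never needs ground-truth motion labels: its supervision comes from the data term (e.g., a warping/similarity loss) and from the teacher's predictions, consistent with the unsupervised setting described in the abstract.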