Robust Deep Learning for Pulse-echo Speed of Sound Imaging via Time-shift Maps
Authors
Abstract
Accurately imaging the spatial distribution of longitudinal speed of sound (SoS) has a profound impact on image quality and the diagnostic value of ultrasound. Knowledge of the SoS distribution enables effective aberration correction to improve image quality. SoS imaging also provides a new contrast mechanism to facilitate disease diagnosis. However, SoS imaging is challenging in pulse-echo mode. Deep learning (DL) is a promising approach for pulse-echo SoS imaging and may yield more accurate results than purely physics-based approaches. Herein, we developed a robust DL approach for SoS imaging that learns the nonlinear mapping between measured time shifts and the underlying SoS without being subject to the constraints of a specific forward model. Several strategies were adopted to enhance model performance. Time-shift maps were computed by adopting a common mid-angle configuration from the non-DL literature, normalizing the complex beamformed ultrasound data, and accounting for the depth-dependent frequency when converting phase shifts to time shifts. The structural similarity index measure (SSIM) was incorporated into the loss function so that the model learns the global structure of the SoS maps. A two-stage training strategy was employed, leveraging computationally efficient ray-tracing synthesis for extensive pretraining and more realistic but computationally expensive full-wave simulations for fine-tuning. With these combined strategies, our model was shown to be robust and generalizable across different conditions. The simulation-trained model successfully reconstructed the SoS maps of phantoms from experimental data. Compared with a physics-based inversion approach, our method improved reconstruction accuracy and contrast-to-noise ratio in phantom experiments. These results demonstrated the accuracy and robustness of our approach.
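The abstract describes converting phase shifts between beamformed images into time shifts while normalizing the complex data and accounting for a depth-dependent frequency. The following is a minimal sketch of that conversion, assuming two complex (IQ) beamformed images from a mid-angle transmit/receive pair and a precomputed depth-dependent frequency estimate; the function and variable names are illustrative, not the paper's exact pipeline.

```python
import numpy as np

def time_shift_map(iq_a, iq_b, f_depth):
    """Estimate a time-shift map from two complex beamformed images.

    iq_a, iq_b : complex arrays, shape (n_depth, n_lateral), from a
                 mid-angle Tx/Rx pair (illustrative assumption).
    f_depth    : depth-dependent frequency estimate in Hz, shape (n_depth,).
    """
    # Normalize the complex beamformed data so only phase differences remain.
    iq_a = iq_a / (np.abs(iq_a) + 1e-12)
    iq_b = iq_b / (np.abs(iq_b) + 1e-12)

    # Phase shift between the two images at each pixel.
    phase_shift = np.angle(iq_a * np.conj(iq_b))

    # Convert phase shift to time shift using the depth-dependent frequency.
    return phase_shift / (2.0 * np.pi * f_depth[:, None])
```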
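The loss combining a pixel-wise term with SSIM could look like the sketch below; the weighting `alpha`, the `data_range`, and the use of the third-party `pytorch_msssim` package are assumptions for illustration rather than the paper's exact formulation.

```python
import torch
from pytorch_msssim import ssim  # third-party SSIM implementation (assumption)

def sos_loss(pred, target, alpha=0.2, data_range=1.0):
    """Pixel-wise MSE plus an SSIM term that encourages the network to
    reproduce the global structure of the SoS map.

    pred, target : tensors of shape (batch, 1, H, W), scaled to [0, data_range].
    alpha        : illustrative weight on the structural term.
    """
    mse = torch.nn.functional.mse_loss(pred, target)
    structural = 1.0 - ssim(pred, target, data_range=data_range, size_average=True)
    return mse + alpha * structural
```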