Automated Microbubble Discrimination in Ultrasound Localization Microscopy by Vision Transformer.
Authors
Abstract
Ultrasound localization microscopy (ULM) has revolutionized microvascular imaging by breaking the acoustic diffraction limit. However, different ULM workflows rely heavily on distinct forms of prior knowledge, such as the impulse response and the empirical selection of parameters (e.g., the number of microbubbles (MBs) per frame, M), or the consistency between training and test datasets in deep learning (DL)-based studies. We propose a general ULM pipeline that reduces reliance on such priors. Our approach leverages a DL model that simultaneously distills MB signals and suppresses speckle in every frame without estimating the impulse response or M. The method features an efficient channel attention vision transformer (ViT) and a progressive learning strategy that enables it to learn global information by training on progressively larger patch sizes. Ample synthetic data were generated with the k-Wave toolbox to simulate diverse MB patterns, overcoming the scarcity of labeled data. The ViT output was further processed by a standard radial symmetry method for sub-pixel localization. Our method generalized well to public datasets unseen during training: one in silico dataset with ground truth and four in vivo datasets of mouse tumor, rat brain, rat brain bolus, and rat kidney. Across signal-to-noise ratios from 60 dB to 10 dB, our pipeline outperformed conventional ULM, achieving higher positive predictive values (precision in DL terms; 0.88-0.41 vs. 0.83-0.16) and better localization accuracy (root-mean-square errors of 0.25-0.14 λ vs. 0.31-0.13 λ). Our model detected more vessels in the diverse in vivo datasets while achieving resolution comparable to the standard method. The proposed ViT-based model, seamlessly integrated with state-of-the-art downstream ULM steps, improved overall ULM performance without requiring prior knowledge.
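To illustrate the sub-pixel localization step mentioned above, the sketch below shows the core idea behind radial-symmetry localization applied to a single candidate patch from the network output. This is not the authors' implementation: the function name `radial_symmetry_center`, the gradient weighting, and the patch handling are illustrative assumptions, and the standard radial-symmetry method differs in its exact gradient stencil and weights. The principle is the same: for a radially symmetric spot, the intensity gradient at every pixel points along a line through the true center, so the center is the least-squares point closest to all gradient lines.

```python
import numpy as np

def radial_symmetry_center(patch, eps=1e-12):
    """Estimate the sub-pixel center of a bright, roughly radially
    symmetric spot (e.g., one MB point-spread function) in a small patch.

    A minimal sketch of radial-symmetry localization: each pixel's
    gradient defines a line that should pass through the center, and the
    center is recovered by a weighted least-squares fit to those lines.
    """
    gy, gx = np.gradient(patch.astype(float))              # intensity gradients
    ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]  # pixel coordinates

    mag = np.sqrt(gx**2 + gy**2)
    w = mag                                  # simple weight: gradient magnitude
    nx, ny = gx / (mag + eps), gy / (mag + eps)

    # Normal equations A c = b for the point c minimizing the weighted
    # squared distance to every gradient line (P projects perpendicular
    # to the gradient direction; flat pixels get zero weight).
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for xi, yi, nxi, nyi, wi in zip(xs.ravel(), ys.ravel(),
                                    nx.ravel(), ny.ravel(), w.ravel()):
        P = np.eye(2) - np.outer([nxi, nyi], [nxi, nyi])
        A += wi * P
        b += wi * P @ np.array([xi, yi])
    cx, cy = np.linalg.solve(A, b)           # sub-pixel (x, y) in patch coordinates
    return cx, cy

# Hypothetical usage: refine a peak detected at pixel precision in the ViT output.
# patch = vit_output[y - 3:y + 4, x - 3:x + 4]
# dx, dy = radial_symmetry_center(patch)
```

In a full ULM pipeline, this refinement would be run on every candidate MB patch per frame, with the resulting sub-pixel positions passed to the downstream tracking and accumulation steps.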