A Cost-Efficient Multi-Angle Fusion Deep Learning Framework for Ultrasound Localization Microscopy
Authors
Abstract
Ultrasound localization microscopy (ULM) enables super-resolution imaging of microvascular structures by localizing microbubbles in clutter-filtered ultrafast ultrasound data. However, conventional clutter filtering methods, particularly those based on singular value decomposition (SVD), are computationally intensive and thus impractical for real-time applications. In this study, we introduce AF-UNet, a lightweight multi-angle deep learning framework designed to accelerate clutter filtering in ULM. The model processes spatiotemporal slices extracted from rotated 3D in-phase/quadrature (IQ) data and fuses them to suppress tissue signals and reconstruct microvascular volumes. AF-UNet demonstrates robust performance across diverse anatomical organs, including the brain, eye, and kidney, achieving strong generalization with consistently high image fidelity. Systematic analysis identifies the angular acquisition settings that maximize fusion performance, with peak improvements observed at 2$^\circ$-3$^\circ$ separations for ocular datasets and at slightly larger angles for rat kidney and brain datasets. AF-UNet achieves an over 20-fold computational speedup compared to conventional SVD filtering while preserving microvascular detail, offering a practical pathway toward real-time, clinically applicable ULM.
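To make the computational baseline concrete, the SVD clutter filter that AF-UNet is compared against can be sketched as follows. This is a generic minimal sketch, not the authors' implementation: it reshapes an IQ stack into a Casorati matrix (space x time), zeroes the leading singular components that are dominated by slowly varying tissue signal, and optionally truncates the noise tail. The function name and the `low_cut`/`high_cut` thresholds are illustrative assumptions; in practice the cutoffs are chosen per dataset.

```python
import numpy as np

def svd_clutter_filter(iq, low_cut, high_cut=None):
    """Generic SVD clutter filter for an ultrafast IQ stack (illustrative sketch).

    iq       : complex array of shape (nz, nx, nt)
    low_cut  : number of leading singular components (tissue) to discard
    high_cut : optional index above which components (noise) are discarded
    """
    nz, nx, nt = iq.shape
    # Casorati matrix: each column is one frame, flattened over space.
    casorati = iq.reshape(nz * nx, nt)
    u, s, vh = np.linalg.svd(casorati, full_matrices=False)
    if high_cut is None:
        high_cut = len(s)
    s_filt = s.copy()
    s_filt[:low_cut] = 0.0   # largest singular values: slowly varying tissue
    s_filt[high_cut:] = 0.0  # smallest singular values: electronic noise
    filtered = (u * s_filt) @ vh
    return filtered.reshape(nz, nx, nt)
```

The full SVD of the Casorati matrix is what makes this baseline expensive for long acquisitions, which motivates replacing it with a fixed-cost learned filter.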