Active guidance in ultrasound bladder scanning using reinforcement learning.
Authors
Affiliations (3)
- Department of Computer Science, Duke University, Durham, NC, USA.
- Philips North America, Cambridge, MA, USA. [email protected].
- Philips North America, Cambridge, MA, USA.
Abstract
Accurate measurement of bladder volume is essential for diagnosing urinary retention and voiding dysfunction. However, finding the optimal view can be challenging for less experienced operators, potentially leading to suboptimal imaging and misdiagnosis. This study proposes an intelligent guidance system that leverages reinforcement learning (RL) to improve image acquisition during ultrasound bladder scanning. We introduce a novel pipeline that incorporates a practical variant of Deep Q-Networks (DQN), known as Adam LMCDQN, which is theoretically validated within linear Markov decision processes. Our system aims to provide real-time, adaptive feedback to operators, improving image quality and consistency. We also present a novel domain-specific reward design that incorporates domain knowledge to enhance RL performance. Our results demonstrate a promising [Formula: see text] success rate in reaching target points along the transverse direction and [Formula: see text] along the longitudinal direction, significantly outperforming supervised deep learning models, which achieved [Formula: see text] and [Formula: see text], respectively. This work is among the first to apply RL to ultrasound guidance for bladder assessment, demonstrating the technical feasibility of optimal-view localization in a simulated environment and investigating exploration strategies and reward formulations relevant to the guidance task.
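To make the described approach more concrete, the sketch below illustrates the general flavor of a DQN-style agent whose parameter updates carry Langevin (SGLD-type) noise for exploration, in the spirit of Adam LMCDQN, together with a hypothetical domain-informed reward. This is a minimal illustration under stated assumptions, not the authors' implementation: the environment interface, the discrete probe actions (e.g., tilt/translate steps), the `image_quality` score, and the reward terms are all assumptions introduced here for clarity.

```python
# Minimal sketch (assumed setup): a gym-like simulated bladder-scan environment with
# discrete probe actions, and a hypothetical reward combining progress toward the
# optimal view with an image-quality bonus. Not the paper's actual code.
import torch
import torch.nn as nn


class QNet(nn.Module):
    """Small MLP Q-network mapping an observation vector to per-action values."""

    def __init__(self, obs_dim: int, n_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions),
        )

    def forward(self, x):
        return self.net(x)


def langevin_q_update(q_net, target_net, batch, optimizer,
                      gamma: float = 0.99, noise_scale: float = 1e-3):
    """One DQN-style update with Gaussian noise injected into the weights after the
    gradient step, approximating Langevin Monte Carlo posterior sampling for
    exploration (illustrative stand-in for Adam LMCDQN-style updates)."""
    obs, actions, rewards, next_obs, done = batch
    with torch.no_grad():
        # Standard one-step TD target from the target network.
        target_q = rewards + gamma * (1 - done) * target_net(next_obs).max(dim=1).values
    q_values = q_net(obs).gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = nn.functional.smooth_l1_loss(q_values, target_q)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Langevin-style perturbation: small Gaussian noise on the parameters so that
    # repeated updates behave like samples from an approximate posterior.
    with torch.no_grad():
        for p in q_net.parameters():
            p.add_(noise_scale * torch.randn_like(p))
    return loss.item()


def shaped_reward(distance_to_target, image_quality, prev_distance):
    """Hypothetical domain-informed reward: reward progress toward the target view
    and add a small bonus for frame quality (both terms are assumptions)."""
    progress = prev_distance - distance_to_target  # > 0 when the probe moves closer
    quality_bonus = 0.1 * image_quality            # e.g., a visibility score in [0, 1]
    return progress + quality_bonus
```

In such a setup, the Langevin noise plays the role that epsilon-greedy exploration plays in a vanilla DQN, while the shaped reward encodes the domain knowledge (proximity to the optimal view, frame quality) that the abstract refers to; the exact reward terms used in the paper are not reproduced here.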