
3D Freehand Ultrasound Reconstruction of the Carotid Artery.

May 4, 2026

Authors

Dou Y, Kiernan MJ, Zhang Z, Mitchell C, Possell A, Lee M, Varghese T

Affiliations (6)

  • Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI, USA; Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA. Electronic address: [email protected].
  • Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI, USA.
  • Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, WI, USA; Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA.
  • Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA; Department of Medicine, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA.
  • Department of Medical Physics, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA.
  • Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA.

Abstract

Sensorless alignment of two-dimensional (2D) freehand ultrasound scans for three-dimensional ultrasound (3DUS) reconstruction offers significant advantages due to its ease of use. Prior approaches have relied on transducers with motion sensors, which are cumbersome and inconvenient in a clinical setting, or on linear wobblers and motorized 2D scanning, which suffer from a small field of view (FOV) and low volume acquisition rates. Freehand transverse B-mode data loops from 20 human volunteers (10 males, 10 females) were used for 3DUS reconstruction. Our two-stream Physics-inspired Learning-based Prediction of Pose Information (PLPPI) model explicitly integrates speckle decorrelation as an inductive bias (temporal information) along with spatial information for alignment using 2D convolutions. A correlation layer then fuses these spatiotemporal cues for freehand frame alignment, and a residual neural network (ResNet) predicts the spatial location of the input frames. PLPPI outperformed baseline deep learning networks (DLNs), i.e., a 2D CNN, ConvLSTM, and DC²-Net, with a 13% improvement in global pixel reconstruction error, a 59.36% improvement in final drift, and a 35.74% improvement in final drift rate over the next best DLN, while requiring significantly less graphics processing unit (GPU) memory. Our model has fewer parameters, requiring less GPU memory to train for freehand 3DUS reconstruction, along with a major reduction in computation time (106% speedup and 131% reduction in GPU memory usage) compared with the baseline DLNs.
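The speckle-decorrelation cue at the heart of the temporal stream can be illustrated with a minimal NumPy sketch. This is not the authors' PLPPI implementation; the function names (`frame_correlation`, `decorrelation_cue`) and the toy random frames are hypothetical, and the sketch only shows the underlying physical intuition: as the probe moves out of plane, consecutive B-mode frames decorrelate, so 1 minus the normalized cross-correlation rises with elevational displacement.

```python
import numpy as np

def frame_correlation(a, b):
    """Normalized cross-correlation between two mean-centered B-mode frames."""
    a = a.astype(np.float64) - a.mean()
    b = b.astype(np.float64) - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom else 0.0

def decorrelation_cue(frames):
    """1 - correlation for each consecutive frame pair; larger values
    suggest larger out-of-plane (elevational) motion between frames."""
    return [1.0 - frame_correlation(frames[i], frames[i + 1])
            for i in range(len(frames) - 1)]

# Toy demo (hypothetical data): an identical frame pair decorrelates to ~0,
# while an unrelated random pair decorrelates to ~1.
rng = np.random.default_rng(0)
f0 = rng.random((64, 64))
frames = [f0, f0.copy(), rng.random((64, 64))]
cues = decorrelation_cue(frames)
```

In a learning-based pipeline such as the one the abstract describes, a cue like this would be one input feature among many; the model learns the mapping from spatiotemporal features to inter-frame pose rather than applying a fixed decorrelation-to-distance curve.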

Topics

Journal Article
