Deep Learning Model Enables 3D Handheld Photoacoustic-Ultrasound Imaging Without Sensors

July 15, 2025

Pusan National University develops MoGLo-Net, an AI model that reconstructs 3D images from handheld 2D photoacoustic and ultrasound scans without external tracking sensors.

Key Details

  • MoGLo-Net uses deep learning to track handheld ultrasound transducer motion from tissue speckle data, eliminating the need for external tracking hardware.
  • Combines a ResNet-based encoder with an LSTM-based motion estimator for accurate motion tracking and 3D reconstruction (a minimal sketch follows this list).
  • Validated on both proprietary and public datasets, outperforming state-of-the-art motion-estimation methods across the reported metrics.
  • Successfully achieved 3D blood vessel reconstructions from combined ultrasound and photoacoustic data.
  • Published June 13, 2025, in IEEE Transactions on Medical Imaging (DOI: 10.1109/TMI.2025.3579454).
  • The innovation aims to make advanced 3D imaging safer, more accurate, and more accessible without costly tracking hardware.
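
For readers curious how such a pipeline fits together, below is a minimal PyTorch sketch of the general idea: a ResNet encoder extracts features from consecutive speckle-bearing frames, and an LSTM regresses per-step transducer motion. The 2-channel frame pairing, 512-d feature width, and 6-DoF pose output are illustrative assumptions, not the published MoGLo-Net architecture.

```python
# Illustrative sketch of a ResNet-encoder + LSTM motion estimator for
# sensorless freehand 3D reconstruction. Architectural details (frame
# pairing, feature width, 6-DoF output) are assumptions, not the
# published MoGLo-Net design.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class SpeckleMotionEstimator(nn.Module):
    """Regress inter-frame transducer motion from pairs of B-mode frames."""

    def __init__(self, feat_dim: int = 512, hidden_dim: int = 256, dof: int = 6):
        super().__init__()
        # ResNet-18 backbone; first conv takes a 2-channel frame pair.
        backbone = resnet18(weights=None)
        backbone.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
        backbone.fc = nn.Identity()  # keep the 512-d pooled feature
        self.encoder = backbone
        # LSTM aggregates per-pair features across the sweep.
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, dof)  # 3 translations + 3 rotations

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, H, W) grayscale sweep
        b, t, h, w = frames.shape
        # Stack each frame with its successor as a 2-channel input.
        pairs = torch.stack([frames[:, :-1], frames[:, 1:]], dim=2)  # (b, t-1, 2, H, W)
        feats = self.encoder(pairs.reshape(-1, 2, h, w)).reshape(b, t - 1, -1)
        out, _ = self.lstm(feats)
        return self.head(out)  # (b, t-1, 6): per-step motion estimates


if __name__ == "__main__":
    sweep = torch.randn(1, 8, 224, 224)  # one simulated 8-frame sweep
    print(SpeckleMotionEstimator()(sweep).shape)  # torch.Size([1, 7, 6])
```

Chaining the per-step motion estimates gives the pose of each 2D frame, which is what allows the frames to be stacked into a 3D volume without any external tracking sensor.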

Why It Matters

MoGLo-Net's ability to reconstruct 3D volumes from standard handheld imaging without external sensors could democratize high-end ultrasound and photoacoustic imaging, which is especially important in resource-limited settings. The advance may enable more effective diagnosis and real-time imaging guidance, expanding access to advanced imaging technologies while reducing equipment cost and complexity.
