
Deep Learning Model Enables 3D Handheld Photoacoustic-Ultrasound Imaging Without Sensors

EurekAlert | Research

Researchers at Pusan National University have developed MoGLo-Net, a deep learning model that reconstructs 3D volumes from handheld 2D photoacoustic and ultrasound scans without external tracking sensors.

Key Details

  • MoGLo-Net uses deep learning to track handheld ultrasound transducer motion from tissue speckle patterns, eliminating the need for external tracking hardware.
  • It combines a ResNet-based encoder with an LSTM-based motion estimator for accurate motion tracking and 3D reconstruction.
  • Validated on both proprietary and public datasets, it outperformed state-of-the-art methods across all reported metrics.
  • The system achieved 3D blood vessel reconstructions from combined ultrasound and photoacoustic data.
  • Published June 13, 2025, in IEEE Transactions on Medical Imaging (DOI: 10.1109/TMI.2025.3579454).
  • The innovation aims to make advanced 3D imaging safer, more accurate, and more accessible without costly hardware.
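The core idea behind sensorless freehand reconstruction, as described above, is that frame-to-frame speckle motion encodes the probe's trajectory: estimate inter-frame motion, accumulate it into poses, then place the 2D frames in 3D. MoGLo-Net learns this with a ResNet encoder and LSTM, but the pipeline can be illustrated with a much simpler classical analogue. The sketch below is a hypothetical, simplified illustration (phase correlation for in-plane speckle tracking, numpy only), not the paper's method; all function names are invented for this example.

```python
import numpy as np

def estimate_shift(prev, curr):
    """Estimate the in-plane (row, col) translation between two
    consecutive frames via phase correlation on the speckle pattern.
    This stands in for the learned ResNet/LSTM motion estimator."""
    F1 = np.fft.fft2(prev)
    F2 = np.fft.fft2(curr)
    # Normalized cross-power spectrum: its inverse FFT peaks at the shift.
    cross = F2 * np.conj(F1)
    cross /= np.abs(cross) + 1e-12
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Unwrap circular shifts larger than half the frame size.
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

def reconstruct(frames):
    """Accumulate per-frame motion into poses and stack the 2D frames
    into a 3D volume (translation-only toy version of freehand 3D US)."""
    pose = np.zeros(2)
    poses = [pose.copy()]
    for a, b in zip(frames, frames[1:]):
        pose = pose + np.array(estimate_shift(a, b))
        poses.append(pose.copy())
    return np.stack(frames), np.array(poses)
```

A real system must also recover out-of-plane elevation and rotation, which is exactly where learned models such as MoGLo-Net replace this kind of hand-crafted estimator.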

Why It Matters

MoGLo-Net's ability to reconstruct 3D volumes from standard handheld scans without external sensors could democratize high-end ultrasound and photoacoustic imaging, which is especially important in resource-limited settings. The advance may enable more effective diagnosis and real-time imaging guidance, expanding access to advanced imaging while reducing equipment cost and complexity.
