Deep-Motion-Net: GNN-based volumetric liver shape reconstruction from single-view 2D projections.

May 13, 2026

Authors

Wijesinghe I, Nix M, Zakeri A, Hokmabadi A, Al-Qaisieh B, Gooya A, Taylor Z

Affiliations (5)

  • Centre for Computational Imaging and Simulation Technologies in Biomedicine, School of Mechanical Engineering, University of Leeds, Leeds, UK. [email protected].
  • Department of Medical Physics and Engineering, St James's University Hospital, Leeds Teaching Hospitals NHS Trust, Leeds, UK.
  • Division of Informatics, Imaging, and Data Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK.
  • School of Medicine and Population Health, University of Sheffield, Sheffield, UK.
  • School of Computing Science, University of Glasgow, Glasgow, UK.

Abstract

Internal anatomical motion challenges precise radiation delivery during external beam radiotherapy. Estimating and compensating for this motion are essential for delivering the planned dose to target volumes while sparing organs-at-risk. This research achieves accurate motion prediction using only planar X-ray imaging from conventional linear accelerators, without surrogate signals or invasive fiducial markers. We propose Deep-Motion-Net: a patient-specific, end-to-end graph neural network (GNN) that enables 3D volumetric organ reconstruction from single in-treatment kV planar X-ray images acquired at arbitrary projection angles. A 2D convolutional neural network (CNN) encoder extracts image features, which four feature-pooling networks fuse onto a 3D template organ mesh. A ResNet-based graph attention network then deforms the feature-encoded mesh. Training uses synthetically generated organ motion instances and corresponding kV images, created by deforming a reference CT volume aligned with the template mesh, generating digitally reconstructed radiographs (DRRs) at the required angles, and applying DRR-to-kV style transfer with a conditional CycleGAN. The model was tested quantitatively on synthetic respiratory motion scenarios and assessed qualitatively on in-treatment images from four liver cancer patients. Across the four datasets, overall mean prediction errors were 0.16 ± 0.13 mm, 0.18 ± 0.19 mm, 0.22 ± 0.34 mm, and 0.12 ± 0.11 mm, with mean peak prediction errors of 1.39 mm, 1.99 mm, 3.29 mm, and 1.16 mm. This approach leverages readily accessible in-treatment imaging, avoiding the need for expensive MRI systems or invasive markers. To the best of our knowledge, this is the first deep learning framework to reconstruct volumetric 3D organ models from single-view images at arbitrary angles throughout an entire in-treatment scan series. Our approach achieves sub-millimetre accuracy when validated on synthetic motion instances and demonstrates clinical feasibility on real-treatment kV images, for which volumetric ground truth is inherently unavailable. The code is available at https://github.com/isurusuranga/DeepMotionNet.
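
To make the pipeline in the abstract concrete, below is a minimal PyTorch sketch of the forward pass it describes: a 2D CNN encoder, per-vertex feature pooling that samples image features at the projected template-vertex locations, and a residual ("ResNet-style") graph attention stage that regresses per-vertex 3D displacements. This is not the authors' code (see their repository for that): all names are illustrative assumptions, the paper fuses features through four feature-pooling networks while this sketch uses a single scale, and the projection of template vertices into normalised image coordinates (which depends on the known kV imaging geometry at the current gantry angle) is assumed to be computed outside the model.

    # Minimal sketch (not the authors' implementation): kV image -> CNN
    # features -> per-vertex pooling -> residual graph attention -> mesh.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class CNNEncoder(nn.Module):
        """Small convolutional encoder standing in for the paper's 2D CNN."""
        def __init__(self, out_ch=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(),
            )

        def forward(self, x):               # x: (B, 1, H, W) kV image
            return self.net(x)              # (B, C, H/4, W/4) feature map

    def pool_vertex_features(feat_map, verts_2d):
        """Bilinearly sample image features at projected vertex positions.
        verts_2d: (B, N, 2) normalised [-1, 1] image coordinates of the
        template vertices under the known projection geometry."""
        grid = verts_2d.unsqueeze(2)                        # (B, N, 1, 2)
        sampled = F.grid_sample(feat_map, grid, align_corners=True)
        return sampled.squeeze(-1).transpose(1, 2)          # (B, N, C)

    class GraphAttention(nn.Module):
        """Single-head graph attention over a fixed mesh adjacency (dense
        O(N^2) formulation; fine for a sketch, not for large meshes)."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.lin = nn.Linear(in_dim, out_dim)
            self.att = nn.Linear(2 * out_dim, 1)

        def forward(self, x, adj):          # adj: (N, N), with self-loops
            h = self.lin(x)                 # (B, N, D)
            B, N, D = h.shape
            hi = h.unsqueeze(2).expand(B, N, N, D)
            hj = h.unsqueeze(1).expand(B, N, N, D)
            e = self.att(torch.cat([hi, hj], dim=-1)).squeeze(-1)
            e = e.masked_fill(adj.unsqueeze(0) == 0, float("-inf"))
            alpha = torch.softmax(F.leaky_relu(e), dim=-1)  # attention weights
            return F.relu(alpha @ h)

    class DeepMotionNetSketch(nn.Module):
        def __init__(self, feat_ch=64, hidden=128):
            super().__init__()
            self.encoder = CNNEncoder(feat_ch)
            self.gnn_in = GraphAttention(feat_ch + 3, hidden)
            self.gnn_res = GraphAttention(hidden, hidden)
            self.head = nn.Linear(hidden, 3)   # per-vertex 3D displacement

        def forward(self, kv_image, template_verts, verts_2d, adj):
            fmap = self.encoder(kv_image)
            vfeat = pool_vertex_features(fmap, verts_2d)
            x = torch.cat([vfeat, template_verts], dim=-1)
            x = self.gnn_in(x, adj)
            x = x + self.gnn_res(x, adj)       # residual block, per the
            return template_verts + self.head(x)  # "ResNet-based" GNN idea

Here adj would be the (N, N) vertex adjacency of the patient-specific template liver mesh (plus the identity for self-loops), so each attention step mixes information only between mesh neighbours; the network outputs deformed vertex positions rather than an image, which is what makes single-view volumetric reconstruction tractable.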
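The synthetic training-data step can be illustrated in the same hedged spirit. The toy function below (name and geometry are assumptions, using NumPy/SciPy) renders a deliberately simplified parallel-beam DRR by rotating a CT volume and integrating attenuation along the beam axis; real kV imaging is cone-beam, and the paper additionally applies conditional-CycleGAN style transfer so the DRRs resemble real kV images.

    import numpy as np
    from scipy.ndimage import rotate

    def toy_parallel_drr(ct_hu, angle_deg):
        """Toy parallel-beam DRR: rotate the CT about the cranio-caudal
        axis, convert HU to attenuation, and integrate along the beam.
        A conceptual stand-in for proper cone-beam DRR rendering."""
        mu = np.clip((ct_hu + 1000.0) / 1000.0, 0.0, None)  # crude HU -> mu
        rotated = rotate(mu, angle_deg, axes=(1, 2), reshape=False, order=1)
        line_integrals = rotated.sum(axis=1)      # integrate along the beam
        return np.exp(-0.02 * line_integrals)     # Beer-Lambert-style intensity
                                                  # (0.02 is an arbitrary scale)

Repeating this over many deformations of the reference CT, at many projection angles, is what produces the paired mesh-deformation/kV-image training instances the abstract describes.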

Topics

Journal Article
