Dynamic abdominal MRI image generation using cGANs: A generalized model for various breathing patterns with extensive evaluation.
Authors
Affiliations (4)
- Robotics and Mechatronics group, Technical Medicine Centre, University of Twente, Enschede, The Netherlands. Electronic address: [email protected].
- Robotics and Mechatronics group, Technical Medicine Centre, University of Twente, Enschede, The Netherlands; Faculty of Engineering, Electrical and Electronics Engineering, Ankara University, Ankara, Turkey.
- Robotics and Mechatronics group, Technical Medicine Centre, University of Twente, Enschede, The Netherlands.
- Robotics and Mechatronics group, Technical Medicine Centre, University of Twente, Enschede, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands.
Abstract
Organ motion is a limiting factor in the treatment of abdominal tumors. During abdominal interventions, medical images are acquired to provide guidance; however, this increases operative time and radiation exposure. In this paper, conditional generative adversarial networks (cGANs) are implemented to generate dynamic magnetic resonance images using external abdominal motion as a surrogate signal. The generator was trained to account for breathing variability, and different models were investigated to improve motion quality. Additionally, objective and subjective studies were conducted to assess image and motion quality. The objective study included several metrics, such as the structural similarity index measure (SSIM) and the mean absolute error (MAE). In the subjective study, 32 clinical experts evaluated the generated images by completing different tasks: identifying images and videos as real or fake via a questionnaire, allowing experts to assess realism in both static images and dynamic sequences. The best-performing model achieved an SSIM of 0.73 ± 0.13, and the MAE was below 4.5 mm and 1.8 mm for the superior-inferior and anterior-posterior directions of motion, respectively. The proposed framework was compared to a related method that used a set of convolutional neural networks combined with recurrent layers. In the subjective study, more than 50% of the generated images and dynamic sequences were classified as real, except in one task. Synthetic images have the potential to reduce the need for intraoperative image acquisition, decreasing time and radiation exposure. A video summary can be found in the supplementary material.
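To make the evaluation metrics concrete, the sketch below shows how SSIM and MAE could be computed between a real and a generated image. This is an illustrative simplification, not the paper's implementation: it uses a single global SSIM window rather than the standard locally windowed SSIM, and the image arrays here are synthetic placeholders.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    # Simplified single-window SSIM; the standard metric averages this
    # formula over local (e.g. 7x7 Gaussian-weighted) windows.
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2)
    )

def mae(x, y):
    # Mean absolute error, e.g. between tracked landmark positions (in mm)
    # or between pixel intensities.
    return np.abs(x - y).mean()

# Placeholder "real" and "generated" images (random, for demonstration only).
rng = np.random.default_rng(0)
real = rng.random((64, 64))
generated = np.clip(real + 0.05 * rng.standard_normal((64, 64)), 0.0, 1.0)

ssim_score = global_ssim(real, generated)
mae_score = mae(real, generated)
```

For identical inputs, `global_ssim` returns 1 and `mae` returns 0; the scores degrade as the generated image diverges from the real one.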