A network-assisted joint image and motion estimation approach for robust 3D MRI motion correction across severity levels.
Authors
Affiliations (5)
- Krembil Brain Institute, University Health Network, Toronto, Ontario, Canada.
- Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada.
- Toronto Neuroimaging Facility, Department of Psychology, University of Toronto, Toronto, Ontario, Canada.
- Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, Canada.
- Center for Neuroscience Imaging Research, Institute for Basic Science and Department of Biomedical Engineering, Sungkyunkwan University, Suwon, Republic of Korea.
Abstract
The purpose of this work was to develop and evaluate a novel method that leverages neural networks and physical modeling for 3D motion correction at different levels of corruption. The proposed method ("UNet+JE") combines an existing neural network ("UNet<sub>mag</sub>") with a physics-informed algorithm that jointly estimates the motion parameters and the motion-compensated image ("JE"). UNet<sub>mag</sub> and UNet+JE were each trained separately on two training datasets with different distributions of motion corruption severity and compared against JE as a benchmark. All five resulting methods were tested on T<sub>1</sub>w 3D MPRAGE scans of healthy participants with simulated (n = 40) and in vivo (n = 10) motion corruption ranging from mild to severe. UNet+JE provided better motion correction than UNet<sub>mag</sub> (p < 10<sup>-2</sup> for all metrics, for both simulated and in vivo data) under both training datasets. UNet<sub>mag</sub> exhibited residual image artifacts and blurring, as well as greater susceptibility to data distribution shifts than UNet+JE. UNet+JE and JE did not differ significantly in image correction quality (p > 0.05 for all metrics), even under strong distribution shifts for UNet+JE. However, UNet+JE reduced runtimes relative to JE, with median reduction factors of 2.00 to 3.80 for the simulation studies and 4.05 for the in vivo study. UNet+JE benefited from the robustness of joint estimation and the fast image improvement provided by the neural network, enabling it to deliver high-quality 3D image correction across a wide range of motion corruption within shorter runtimes.
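For intuition, below is a minimal, self-contained sketch of the general idea behind pairing a fast network-based initialization with physics-based joint estimation: a placeholder network pass supplies a starting image, and an alternating loop then refines the motion parameters and the motion-compensated image. This is an illustrative toy (a 1-D signal, two fully sampled "shots", and a single unknown translation); the names `network_init` and `data_consistency`, the grid-search motion update, and the simplified forward model are assumptions for the sketch and do not reflect the authors' 3D MPRAGE implementation or the UNet<sub>mag</sub> architecture.

```python
# Toy sketch of network-assisted joint image and motion estimation.
# Not the authors' method: 1-D, two-shot data with one unknown translation.
import numpy as np

N = 256
x_true = np.zeros(N)
x_true[96:160] = 1.0                      # toy "anatomy": a boxcar object
true_shift = 3.0                          # unknown translation (in samples)

freqs = np.fft.fftfreq(N)                 # cycles/sample for each k-space line
k_shot1 = np.fft.fft(x_true)                                              # motion-free shot
k_shot2 = np.fft.fft(x_true) * np.exp(-2j * np.pi * freqs * true_shift)   # shot seeing a shifted object

def network_init(k1, k2):
    """Placeholder for the network initialization (UNet_mag): here just the
    naive average reconstruction. A trained network would give a cleaner start."""
    return np.fft.ifft(0.5 * (k1 + k2))

def data_consistency(x, shift):
    """Residual between the forward model (shot 2 sees a shifted object)
    and the measured shot-2 data, for a candidate shift."""
    k_pred = np.fft.fft(x) * np.exp(-2j * np.pi * freqs * shift)
    return np.linalg.norm(k_pred - k_shot2)

x = network_init(k_shot1, k_shot2)        # fast image initialization
shift_est = 0.0
candidates = np.linspace(-8, 8, 321)      # 0.05-sample grid for the motion search
for it in range(10):
    # Motion update: pick the shift that best explains the shot-2 data
    # given the current image estimate.
    costs = [data_consistency(x, s) for s in candidates]
    shift_est = candidates[int(np.argmin(costs))]
    # Image update: undo the estimated phase ramp on shot 2, then combine
    # the two (now mutually consistent) shots and invert.
    k2_corr = k_shot2 * np.exp(2j * np.pi * freqs * shift_est)
    x = np.fft.ifft(0.5 * (k_shot1 + k2_corr))

print(f"estimated shift ~ {shift_est:.2f} (true {true_shift})")
print(f"relative image error: {np.linalg.norm(x.real - x_true) / np.linalg.norm(x_true):.3f}")
```

In this reading of the abstract, the network output serves only as a fast starting point; the final image and motion parameters come from the physics-informed optimization, which is consistent with the reported robustness of UNet+JE to distribution shifts and its shorter runtimes compared to JE alone.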