Fixed-point method for PET reconstruction with learned plug-and-play regularization
Authors
Affiliations (1)
- BioMaps, Université Paris-Saclay, CNRS, Inserm, SHFJ, CEA, 4 Place du général Leclerc, Orsay, Île-de-France, 91401, France.
Abstract

Objective:
Deep learning has shown great promise for improving medical image reconstruction, including PET. However, concerns remain about the stability and robustness of these methods, especially when trained on limited data. This work aims to explore the use of the Plug-and-Play (PnP) framework in PET reconstruction to address these concerns.

Approach:
We propose a convergent PnP algorithm for low-count PET reconstruction based on the Douglas-Rachford splitting method. We consider several denoisers trained to satisfy fixed-point conditions, with convergence properties ensured either during training or by design, including a spectrally normalized network and a deep equilibrium model. We evaluate the bias-standard deviation tradeoff across clinically relevant regions and on an unseen pathological case, in both a synthetic experiment and a real-data study. Comparisons are made with model-based iterative reconstruction, post-reconstruction denoising, a deep end-to-end unfolded network, and PnP with a Gaussian denoiser.
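The PnP Douglas-Rachford scheme referred to above can be illustrated in a toy setting. The sketch below is ours, not the paper's: it swaps the PET Poisson likelihood for a simple quadratic data fidelity (whose proximal step is closed-form) and stands in soft-thresholding for the learned denoiser, since soft-thresholding is a true proximal operator and hence nonexpansive, which guarantees the iteration converges. All names and parameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, lam):
    """Soft-thresholding, the proximal operator of lam * ||.||_1.
    Stands in for the learned denoiser D; any (firmly) nonexpansive
    denoiser could be plugged in here instead."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def pnp_douglas_rachford(b, denoiser, n_iter=100):
    """Plug-and-Play Douglas-Rachford splitting on a toy quadratic
    data fidelity f(x) = 0.5 * ||x - b||^2 with unit step size.
    In real PET reconstruction f would be the Poisson log-likelihood
    and its prox would need an inner solver; the quadratic keeps this
    sketch self-contained."""
    z = np.zeros_like(b)
    for _ in range(n_iter):
        x = 0.5 * (b + z)          # prox of the quadratic fidelity
        y = denoiser(2.0 * x - z)  # denoiser replaces the prior's prox
        z = z + y - x              # Douglas-Rachford update
    return y

# With the soft-thresholding "denoiser", the fixed point is the
# elementwise shrinkage of b (the lasso solution).
b = np.array([2.0, -0.3, 0.8])
x_hat = pnp_douglas_rachford(b, lambda v: soft_threshold(v, 0.5))
# → approximately [1.5, 0.0, 0.3]
```

Replacing `soft_threshold` with a trained network whose nonexpansiveness is enforced (by spectral normalization, or by construction in a deep equilibrium model) gives the kind of convergent PnP iteration the paper studies.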

Main Results:
Our method achieves lower bias than post-reconstruction processing and reduced standard deviation at matched bias compared to model-based iterative reconstruction. While the spectrally normalized network underperforms in generalization, the deep equilibrium model remains competitive with convolutional networks for plug-and-play reconstruction and generalizes better to the unseen pathology. It also generalizes more consistently than the end-to-end unfolded network.

Significance:
This study demonstrates the potential of the PnP framework to improve image quality and quantification accuracy in PET reconstruction. It also highlights the importance of how convergence conditions are imposed on the denoising network to ensure robust and generalizable performance.