ScatterFusionNet: physics-informed deep scatter correction for dual-detector CT using a Klein--Nishina prior.
Authors
Affiliations (1)
- Department of Engineering Physics, Tsinghua University, No. 30 Shuangqing Road, Haidian District, Beijing, 100084, China.
Abstract
Objective: Scatter artifacts degrade cone-beam CT image quality, yet acquiring ground-truth scatter-free data in clinical settings requires time-consuming measurements. Purely data-driven deep learning methods generalize poorly across anatomical regions, often learning anatomy-specific shortcuts rather than the underlying scattering physics. We aim to develop a physics-informed scatter correction framework that generalizes robustly across anatomies without extensive site-specific training data.

Approach: We propose ScatterFusionNet, a physics-informed neural network that incorporates Klein--Nishina scattering priors to embed angular scattering constraints into the learning process. The network fuses side-detector measurements from dual-detector CT with a multi-scale backbone via Feature-wise Linear Modulation (FiLM), where Klein--Nishina prior maps guide feature modulation in a physically grounded manner. The model is trained on Monte Carlo simulations and fine-tuned using a single right-ear dataset.
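The two ingredients named above, the Klein--Nishina angular prior and FiLM-style feature modulation, can be illustrated with a minimal NumPy sketch. The Klein--Nishina differential cross-section is standard physics; the `film_modulate` function and the way (gamma, beta) are derived from the prior map are hypothetical simplifications (in the actual network they would be predicted by learned layers, which are not described in the abstract).

```python
import numpy as np

R_E = 2.8179403262e-15  # classical electron radius (m)
MEC2_KEV = 511.0        # electron rest-mass energy (keV)

def klein_nishina(theta, energy_kev):
    """Klein--Nishina differential cross-section d(sigma)/d(Omega)
    (m^2/sr) for Compton scattering at angle theta (rad) and
    incident photon energy in keV."""
    # Ratio of scattered to incident photon energy (Compton shift).
    r = 1.0 / (1.0 + (energy_kev / MEC2_KEV) * (1.0 - np.cos(theta)))
    return 0.5 * R_E**2 * r**2 * (r + 1.0 / r - np.sin(theta) ** 2)

def film_modulate(features, prior_map, w_gamma, w_beta):
    """FiLM: per-channel scale-and-shift of a feature map, with
    (gamma, beta) derived here from a scalar summary of the
    Klein--Nishina prior map via fixed weights (a toy stand-in
    for the learned conditioning network).
    features: (C, H, W); prior_map: (H, W); w_gamma, w_beta: (C,)."""
    s = prior_map.mean()           # scalar conditioning signal
    gamma = 1.0 + w_gamma * s      # identity modulation when s == 0
    beta = w_beta * s
    return gamma[:, None, None] * features + beta[:, None, None]
```

A possible usage: evaluate the cross-section over a per-pixel scattering-angle map, normalize it into a prior map, and use it to modulate a feature tensor; at theta = 0 the expression reduces to the Thomson value r_e^2, a quick sanity check.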

Main results: When evaluated on unseen right-teeth and left-teeth datasets, the proposed method achieves 5.7\% and 3.6\% CNR improvements, respectively, closely matching beam stop array ground truth. Under identical training protocols, a classical SE UNet baseline shows only marginal gains (0.8\% and 1.0\%), indicating substantially weaker cross-anatomy generalization.
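For concreteness, the percentage gains above can be read against one common definition of contrast-to-noise ratio, CNR = |mean(ROI) - mean(background)| / std(background). The sketch below uses this definition with hypothetical ROI/background masks; the paper's exact ROI placement and CNR formula are not given in the abstract.

```python
import numpy as np

def cnr(roi, background):
    """Contrast-to-noise ratio: |mean(ROI) - mean(bg)| / std(bg)."""
    return abs(roi.mean() - background.mean()) / background.std()

def cnr_improvement(corrected, uncorrected, roi_mask, bg_mask):
    """Percent CNR gain of a scatter-corrected image over the
    uncorrected one, for given boolean ROI/background masks."""
    before = cnr(uncorrected[roi_mask], uncorrected[bg_mask])
    after = cnr(corrected[roi_mask], corrected[bg_mask])
    return 100.0 * (after - before) / before
```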

Significance: These results demonstrate that embedding physics-informed priors into deep networks is critical for building robust scatter correction systems. By integrating Klein--Nishina constraints with dual-detector measurements, the proposed framework enhances generalizability across anatomical sites while reducing dependence on extensive anatomy-specific training data.