SMART: Self-supervised Learning for Metal Artifact Reduction in Computed Tomography Using Range-Null Space Decomposition
Authors
Abstract
Metal artifacts in computed tomography (CT) imaging significantly hinder diagnostic accuracy and clinical decision-making. While deep learning-based metal artifact reduction (MAR) methods have shown promising progress, their clinical application remains constrained by three major challenges: (1) balancing metal artifact reduction with the preservation of critical anatomical structures, (2) effectively capturing the clinical priors of metal artifacts, and (3) dynamically adapting to polychromatic spectral variations. To address these limitations, we propose SMART, a Self-supervised MAR method for computed Tomography that leverages range-null space decomposition (RND) to model metal and tissue linear attenuation coefficients (LACs) separately, and employs implicit neural representation (INR) to learn their respective clinical characteristics without explicit supervision. Specifically, RND decouples the metal and tissue LACs into a residual range component that models the metal LAC and captures metal artifacts, facilitating their reduction, and a null component that models the tissue LAC and focuses on preserving tissue details. To cope with the lack of paired data in clinical settings, we use INR to learn the clinical characteristics of these components in a self-supervised manner. Furthermore, SMART incorporates polychromatic spectra into the implicit representation, allowing dynamic adaptation to spectral variations across different imaging conditions. Extensive experiments on one synthetic and two clinical datasets demonstrate the strong potential of SMART in real-world scenarios: by flexibly adapting to spectral variations, it achieves superior generalizability to out-of-distribution clinical data.
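The range-null space decomposition at the core of the abstract can be illustrated with a minimal NumPy sketch. This is a toy example under stated assumptions, not the paper's implementation: a small dense matrix `A` stands in for the CT forward operator, and the Moore-Penrose pseudo-inverse plays the role of a (hypothetical) approximate inverse. Any signal `x` then splits into a range component `A⁺Ax`, which is fully determined by the measurements `Ax`, and a null component `(I − A⁺A)x`, which the operator cannot see:

```python
import numpy as np

# Range-null space decomposition sketch (illustrative only).
# A: toy under-determined forward operator standing in for the CT projector.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
x = rng.standard_normal(50)  # toy flattened "image"

A_pinv = np.linalg.pinv(A)           # Moore-Penrose pseudo-inverse A^+
range_part = A_pinv @ (A @ x)        # component constrained by measurements Ax
null_part = x - range_part           # component invisible to A

# The two components exactly reconstruct x, and the null component
# vanishes under the forward operator (since A A^+ A = A).
assert np.allclose(range_part + null_part, x)
assert np.allclose(A @ null_part, 0.0, atol=1e-8)
```

In the paper's setting, the range component is where the metal LAC (and hence the artifacts) is modeled, while the null component carries the tissue LAC to be preserved; the sketch only demonstrates the algebraic split itself.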