KC-UNIT: Multi-kernel conversion using unpaired image-to-image translation with perceptual guidance in chest computed tomography imaging.

Authors

Choi C, Kim D, Park S, Lee H, Kim H, Lee SM, Kim N

Affiliations (6)

  • Department of Biomedical Engineering, BK21 Project, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
  • Department of Biomedical Engineering, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
  • School of Medicine, CHA University, Seoul, Republic of Korea.
  • Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
  • Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea.
  • Department of Biomedical Engineering, BK21 Project, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea; Department of Radiology and Research Institute of Radiology, Asan Medical Center, University of Ulsan College of Medicine, Seoul, Republic of Korea. Electronic address: [email protected].

Abstract

Computed tomography (CT) images are reconstructed from raw data, including the sinogram, with various convolution kernels via back projection. Kernels are typically chosen according to the anatomical structure being imaged and the purpose of the scan, balancing the trade-off between image sharpness and pixel noise. Because sinograms require large storage capacity and storage space is often limited in clinical settings, CT images are generally reconstructed with only one specific kernel, and the sinogram is typically discarded after a week. Many researchers have therefore proposed deep learning-based image-to-image translation methods for CT kernel conversion. However, transferring the style of the target kernel while preserving anatomical structure remains challenging, particularly when translating CT images from a source domain to a target domain in an unpaired manner, as is common in real-world settings. We therefore propose a novel kernel conversion method using unpaired image-to-image translation (KC-UNIT). The approach regularizes the discriminator with feature maps from the generator to improve semantic representation learning. To capture content and style features, a cosine-similarity content loss and a contrastive style loss are defined between the generator's feature maps and the discriminator's semantic label maps. These losses can be incorporated simply by modifying the discriminator's architecture, without requiring any additional learnable or pre-trained networks. KC-UNIT preserved fine-grained anatomical structure from the source domain during transfer, and it outperformed existing generative adversarial network-based methods on most kernel conversion tasks across three kernel domains. The code is available at https://github.com/cychoi97/KC-UNIT.
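The abstract describes two guidance terms: a cosine-similarity content loss and a contrastive (InfoNCE-style) style loss between generator feature maps and discriminator semantic maps. The sketch below illustrates the general shape of such losses in plain NumPy; the function names, tensor shapes, and temperature value are illustrative assumptions, not the authors' implementation (see the linked repository for the actual code).

```python
import numpy as np

def cosine_content_loss(gen_feat, disc_sem):
    """Content term: one minus the per-sample cosine similarity between
    flattened generator feature maps and discriminator semantic maps."""
    g = gen_feat.reshape(gen_feat.shape[0], -1)
    d = disc_sem.reshape(disc_sem.shape[0], -1)
    cos = np.sum(g * d, axis=1) / (
        np.linalg.norm(g, axis=1) * np.linalg.norm(d, axis=1) + 1e-8)
    return float(np.mean(1.0 - cos))

def contrastive_style_loss(anchor, positive, negatives, tau=0.07):
    """Style term (InfoNCE-style): pull the anchor style vector toward
    the target-kernel (positive) vector, push it away from negatives."""
    def unit(v):
        return v / (np.linalg.norm(v) + 1e-8)
    a = unit(anchor)
    logits = np.array([np.dot(a, unit(positive)) / tau]
                      + [np.dot(a, unit(n)) / tau for n in negatives])
    # Cross-entropy with the positive as the correct class.
    return float(-logits[0] + np.log(np.sum(np.exp(logits))))
```

Identical feature maps give a content loss near zero, and the style loss is minimized when the anchor aligns with the positive rather than with any negative.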

Topics

Journal Article
