Reducing bulky medical images via shape-texture decoupled deep neural networks.
Authors
Affiliations (3)
- Department of Automation, Tsinghua University, Beijing, China.
- Department of Automation, Tsinghua University, Beijing, China. [email protected].
- Institute of Brain and Cognitive Sciences, Tsinghua University, Beijing, China. [email protected].
Abstract
The explosive growth of medical data poses significant challenges for storage and sharing. Current compression techniques based on Implicit Neural Representations (INRs) strike an effective balance between encoding accuracy and compression ratio, yet they suffer from slow encoding speeds. By contrast, data-driven compressors encode quickly but rely heavily on their training data and generalize poorly. To develop a practical compression tool that overcomes these limitations, we introduce Shape-Texture Decoupled Compression (DeepSTD), which targets datasets of the same modality and body part and decouples the variations into shape and texture components for separate encoding. Disentangling the two components makes it possible to design encoding strategies suited to their respective characteristics: swift INR-based shape encoding and effective data-driven texture encoding. The proposed approach combines the advantages of INR-based and data-driven models to achieve high fidelity, fast encoding speed, and good generalizability. Comprehensive evaluations on large-scale Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) datasets demonstrate superior performance across encoding quality, compression ratio, and speed. Moreover, with features such as parallel acceleration on multiple Graphics Processing Units (multi-GPU), flexible control of the compression ratio, and broad applicability, DeepSTD offers a robust and efficient solution for the pressing demands of modern medical data compression.
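The abstract does not specify the network design, but the INR-based encoding it refers to can be pictured as overfitting a small coordinate network to a single volume or slice and storing its weights in place of the raw voxels. The following minimal sketch, in PyTorch with an assumed architecture (a tiny ReLU MLP), assumed hyperparameters, and a synthetic stand-in slice, illustrates that general idea; it is not the authors' implementation.

```python
# Illustrative sketch only: fit a small coordinate MLP (an INR) to one 2D slice.
# Network size, learning rate, and the synthetic "slice" below are assumptions.
import torch
import torch.nn as nn

class CoordMLP(nn.Module):
    """Maps (x, y) coordinates in [-1, 1]^2 to a single intensity value."""
    def __init__(self, hidden=64, layers=3):
        super().__init__()
        dims = [2] + [hidden] * layers + [1]
        blocks = []
        for i in range(len(dims) - 1):
            blocks.append(nn.Linear(dims[i], dims[i + 1]))
            if i < len(dims) - 2:
                blocks.append(nn.ReLU())
        self.net = nn.Sequential(*blocks)

    def forward(self, coords):
        return self.net(coords)

# Synthetic stand-in for a medical image slice (H x W intensities in [0, 1]).
H, W = 64, 64
ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)      # (H*W, 2) input coordinates
target = torch.exp(-(xs**2 + ys**2) * 4).reshape(-1, 1)    # toy smooth "shape" pattern

model = CoordMLP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(500):            # overfit the INR to this one slice
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(coords), target)
    loss.backward()
    opt.step()

# The compressed representation is the MLP's weights rather than the raw pixels.
n_params = sum(p.numel() for p in model.parameters())
print(f"pixels: {H * W}, INR parameters: {n_params}")
```

In such a scheme the compression ratio is controlled by the network size (here the hypothetical `hidden` and `layers` arguments), and fitting time dominates the encoding cost, which is the slow-encoding drawback the abstract attributes to purely INR-based compressors.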