3D FusionNet for synthetic CT-based lung cancer segmentation.
Authors
Affiliations (3)
- School of Computer Engineering, KIIT-Deemed to be University, Bhubaneswar, 751024, Odisha, India.
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, 303007, Rajasthan, India.
- Department of Computer Science and Engineering, Manipal University Jaipur, Jaipur, 303007, Rajasthan, India. [email protected].
Abstract
Interpretation of three-dimensional medical scans remains a demanding task in computer-assisted diagnostic systems. In this study we present a segmentation framework specifically designed for lung CT images, motivated by the fact that lung cancer is one of the most fatal cancers worldwide and the demand for better detection tools remains pressing. To address this, the proposed work combines Deep Convolutional Generative Adversarial Networks (DCGAN) with the 3D-TDUnet++ architecture. DCGAN is used to generate additional CT samples, which reduces the problem of insufficient annotated data and improves the robustness of the training set. The experiments use the publicly available Chest CT-Scan Images Dataset from Kaggle, further enriched with synthetic images created by DCGAN. Trained on this augmented dataset, the proposed 3D-FusionNet model achieves superior performance in cancer detection, with higher accuracy, sensitivity and specificity than conventional approaches. The integration of DCGAN and 3D-TDUnet++ with Non-Local Feature Aggregation (NLFa) makes the system promising for clinical environments where limited data often restricts the application of deep learning models. Quantitative evaluation shows that 3D-FusionNet achieves a Dice coefficient of 88.94%, an F1-score of 88.94% and an accuracy of 93.37%, outperforming benchmark models such as DenseNet, ResNet and MDDNet-ASPP. This hybrid design leverages the strengths of generative augmentation and non-local attention to improve both segmentation precision and clinical robustness, demonstrating how deep generative modeling and volumetric segmentation can be integrated into a cohesive machine vision system capable of scalable and efficient deployment in real-world diagnostic pipelines.
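To make the augmentation step concrete, the sketch below shows a standard DCGAN generator in PyTorch for producing synthetic single-channel CT slices from random noise. The abstract does not specify the paper's layer sizes or training hyperparameters, so the 64x64 output resolution, latent dimension, and channel widths here are illustrative assumptions following the common DCGAN recipe, not the authors' exact configuration.

```python
# Hypothetical DCGAN generator for synthetic lung CT slices (a minimal
# sketch, assuming 64x64 single-channel outputs and a 100-d latent vector;
# the paper's actual architecture is not given in the abstract).
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim: int = 100, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            # Project the latent vector to a 4x4 feature map, then upsample
            # with strided transposed convolutions (standard DCGAN recipe).
            nn.ConvTranspose2d(z_dim, base * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(base * 8), nn.ReLU(True),
            nn.ConvTranspose2d(base * 8, base * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 4), nn.ReLU(True),
            nn.ConvTranspose2d(base * 4, base * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base * 2), nn.ReLU(True),
            nn.ConvTranspose2d(base * 2, base, 4, 2, 1, bias=False),
            nn.BatchNorm2d(base), nn.ReLU(True),
            nn.ConvTranspose2d(base, 1, 4, 2, 1, bias=False),
            nn.Tanh(),  # outputs in [-1, 1], matching normalized CT intensities
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

# Example: draw a batch of 8 synthetic 64x64 CT slices from random noise.
g = Generator()
fake = g(torch.randn(8, 100, 1, 1))
print(fake.shape)  # torch.Size([8, 1, 64, 64])
```

Samples like these would be mixed into the real training set before segmentation training, which is the data-scarcity mitigation the abstract describes.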
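The abstract also credits Non-Local Feature Aggregation (NLFa) for the segmentation gains. The paper's exact NLFa design is not stated here; the sketch below is one plausible instantiation, a generic embedded-Gaussian non-local block (in the style of Wang et al., 2018) applied to 3D feature volumes, illustrating how each voxel's features can be refined by aggregating context from all other voxels.

```python
# A minimal sketch of a 3D non-local feature aggregation block in PyTorch,
# assuming an embedded-Gaussian attention formulation; this is illustrative
# and may differ from the paper's actual NLFa module.
import torch
import torch.nn as nn

class NonLocalBlock3D(nn.Module):
    def __init__(self, in_channels: int, reduction: int = 2):
        super().__init__()
        inter = max(in_channels // reduction, 1)
        # 1x1x1 convolutions produce query/key/value embeddings.
        self.theta = nn.Conv3d(in_channels, inter, kernel_size=1)
        self.phi = nn.Conv3d(in_channels, inter, kernel_size=1)
        self.g = nn.Conv3d(in_channels, inter, kernel_size=1)
        self.out = nn.Conv3d(inter, in_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, d, h, w = x.shape
        n = d * h * w
        q = self.theta(x).view(b, -1, n).permute(0, 2, 1)  # (b, n, c')
        k = self.phi(x).view(b, -1, n)                     # (b, c', n)
        v = self.g(x).view(b, -1, n).permute(0, 2, 1)      # (b, n, c')
        attn = torch.softmax(q @ k, dim=-1)  # pairwise voxel affinities
        y = (attn @ v).permute(0, 2, 1).view(b, -1, d, h, w)
        return x + self.out(y)  # residual aggregation of long-range context

# Example: aggregate long-range context over a small feature volume.
feats = torch.randn(1, 32, 8, 16, 16)
print(NonLocalBlock3D(32)(feats).shape)  # torch.Size([1, 32, 8, 16, 16])
```

A block of this kind would typically sit at a coarse decoder stage of the 3D-TDUnet++ backbone, where the attention map over all voxel pairs is still affordable in memory.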