From digital chest tomosynthesis to 3D CT.
Authors
Affiliations (5)
- Department of Biomedical Engineering and Medical Physics, Sahlgrenska University Hospital, Region Västra Götaland, SE-413 45, Gothenburg, Sweden.
- Department of Medical Radiation Sciences, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, SE-413 45, Gothenburg, Sweden.
- Department of Applied IT, University of Gothenburg, SE-412 96, Gothenburg, Sweden.
- Department of Radiology, Institute of Clinical Sciences, Sahlgrenska Academy, University of Gothenburg, SE-413 45, Gothenburg, Sweden.
- Department of Radiology, Sahlgrenska University Hospital, Region Västra Götaland, SE-413 45, Gothenburg, Sweden.
Abstract
Digital chest tomosynthesis refers to the 3D reconstruction of low-dose projection images acquired within a limited angular range. The reconstructions have lower depth resolution and are more prone to motion artifacts than computed tomography (CT). While recent deep learning approaches aim to reconstruct full-resolution CT volumes from projections, they are computationally demanding due to the high resolution and inherently 3D nature of the task. In this study, we propose a more efficient alternative. Our deep learning-based framework reconstructs sagittal CT slices from small patches of projection data, significantly lowering memory demands. Rather than predicting continuous Hounsfield unit (HU) values, we segment voxels into air, soft tissue, or bone classes. Our results show that the method captures coarse structural features and depth information with high consistency, but struggles to reconstruct fine details. While not yet suitable for clinical deployment, the approach highlights a promising direction for low-resource tomosynthesis-based volumetric imaging.
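The abstract's reformulation, predicting discrete tissue classes instead of continuous HU values, can be illustrated with a simple quantization step. The sketch below is illustrative only: the thresholds (-300 and 300 HU) and class labels are assumptions chosen for demonstration, not values taken from the study.

```python
import numpy as np

# Illustrative class labels for the three tissue categories named in the abstract.
AIR, SOFT_TISSUE, BONE = 0, 1, 2

def hu_to_class(hu: np.ndarray) -> np.ndarray:
    """Quantize Hounsfield unit (HU) values into air / soft-tissue / bone labels.

    Thresholds are assumed for illustration: air is far below soft tissue
    (around -1000 HU), bone well above it (several hundred HU).
    """
    labels = np.full(hu.shape, SOFT_TISSUE, dtype=np.uint8)
    labels[hu < -300] = AIR   # air and lung parenchyma
    labels[hu > 300] = BONE   # dense bone
    return labels

# Example: a tiny 2x2 "slice" of HU values.
slice_hu = np.array([[-1000.0, 40.0],
                     [700.0, -500.0]])
print(hu_to_class(slice_hu))  # [[0 1]
                              #  [2 0]]
```

Training a network against such labels turns the reconstruction into a per-voxel classification problem, which is typically easier to optimize and less memory-hungry than full HU regression, at the cost of discarding fine intensity detail.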