Incremental 2D self-labelling for effective 3D medical volume segmentation with minimal annotations.
Authors
Affiliations (5)
- School of Computing, Newcastle University, Urban Sciences Building, 1 Science Square, Newcastle Upon Tyne, NE4 5TG, UK.
- Sunderland Eye Infirmary, National Health Service, Queen Alexandra Rd, Sunderland, SR2 9HP, UK.
- Biosciences Institute, Newcastle University, Catherine Cookson Building, Newcastle Upon Tyne, NE2 4HH, UK.
- School of Computing, Newcastle University, Urban Sciences Building, 1 Science Square, Newcastle Upon Tyne, NE4 5TG, UK. [email protected].
- Biosciences Institute, Newcastle University, Catherine Cookson Building, Newcastle Upon Tyne, NE2 4HH, UK. [email protected].
Abstract
The development and application of deep learning-based models have seen significant success in medical image segmentation, transforming diagnostic and treatment processes. However, these advancements often rely on large, fully annotated datasets, which are challenging to obtain due to the labour-intensive and costly nature of expert annotation. Therefore, we sought to explore the feasibility and efficacy of training 2D models under severe annotation constraints, aiming to optimise segmentation performance while minimising annotation costs. We propose an incremental 2D self-labelling framework for segmenting 3D medical volumes from a single annotated slice per volume. A 2D U-Net is first trained on these central slices. The model then iteratively generates and filters pseudo-labels for adjacent slices, progressively fine-tuning itself on an expanding dataset. This process is repeated until the entire training set is pseudo-labelled to produce the final model. On brain MRI and liver CECT datasets, our self-labelling approach improved segmentation performance compared to using only the sparse ground-truth data, increasing the Dice Similarity Coefficient and Intersection over Union by up to 15.95% and 26.75%, respectively. It also improved 3D continuity, reducing the 95th percentile Hausdorff Distance from 69.88 mm to 36.46 mm. Parameter analysis revealed that a gradual propagation of high-confidence pseudo-labels was most effective. Our framework demonstrates that a computationally efficient 2D model can be leveraged through self-labelling to achieve robust 3D segmentation performance and coherence from extremely sparse annotations, offering a viable solution to reduce the annotation burden in medical imaging.
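The outward propagation of pseudo-labels from a single annotated central slice can be sketched as follows. This is a highly simplified illustration, not the authors' implementation: the `predict` callable stands in for the fine-tuned 2D U-Net, per-pixel thresholding stands in for the paper's pseudo-label filtering, and the iterative fine-tuning step is omitted; `incremental_self_label` and `confidence_threshold` are hypothetical names.

```python
import numpy as np

def incremental_self_label(volume, centre_label, predict, confidence_threshold=0.9):
    """Propagate pseudo-labels outward from a single annotated central slice.

    volume: (D, H, W) array of image slices.
    centre_label: (H, W) ground-truth binary mask for the central slice.
    predict: callable(slice_2d) -> (H, W) foreground-probability map,
             standing in for the (re-)trained 2D segmentation model.
    Returns a dict mapping slice index -> binary label mask.
    """
    depth = volume.shape[0]
    centre = depth // 2
    labels = {centre: centre_label}
    # Sweep outward from the centre one slice at a time, in both directions,
    # so pseudo-labels spread gradually to adjacent slices.
    for offset in range(1, centre + 1):
        for idx in (centre - offset, centre + offset):
            if 0 <= idx < depth and idx not in labels:
                prob = predict(volume[idx])
                # Keep only high-confidence pixels as pseudo-labels; in the
                # full framework the model would also be fine-tuned on the
                # expanding labelled set after each round.
                labels[idx] = prob >= confidence_threshold
    return labels
```

In the paper's setting this loop would repeat with model fine-tuning between rounds until every training slice carries a (pseudo-)label.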