
Transformer-based and CNN-based models for clinically effective 2D and 3D pelvic bone segmentation in CT imaging.

December 26, 2025 · PubMed Papers

Authors

Nesheli SJ, Sabet M, Koozari A, Mirzaghavami P, Eftekhar A, Elhaie M, Rouhi S, Lariche NJ, Abidi M, Rezaeijo SM

Affiliations (10)

  • Faculty of Engineering, University of Science and Culture, Tehran, Iran.
  • Department of Computer Engineering, School of Engineering, Fasa University, Fasa, Iran.
  • Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran.
  • Department of Radiology, School of Paramedical Sciences, Zanjan University of Medical Sciences, Zanjan, Iran.
  • Department of Medical Physics, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran.
  • Department of Medical Physics, Faculty of Medicine, Kermanshah University of Medical Sciences, Kermanshah, Iran.
  • Student Research Committee, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran.
  • Noncommunicable Diseases Research Center, Fasa University of Medical Sciences, Fasa, Iran.
  • Department of Medical Physics, Faculty of Medicine, Ahvaz Jundishapur University of Medical Sciences, Ahvaz, Iran. [email protected].
  • Sorena Programming and Artificial Intelligence Academy, Technical and Vocational Training Organization, Ahvaz, Iran. [email protected].

Abstract

Accurate segmentation of pelvic structures and fracture fragments in trauma CT scans remains a technical challenge due to anatomical complexity and image variability. This study provides the first systematic comparison of CNN-based (U-Net, LinkNet) and transformer-based (UNETR) architectures in 2D and 3D formats for automated pelvic bone segmentation on the PENGWIN MICCAI 2024 challenge dataset, evaluating encoder backbones (VGG19, ResNet50) to identify optimal strategies for clinical trauma imaging. CT data from 150 patients with pelvic fractures were obtained from the PENGWIN MICCAI 2024 challenge. Preprocessing included normalization, resampling, one-hot encoding of segmentation masks, and format conversion. The segmentation task focused on the sacrum, left hip bone, and right hip bone, consolidating the original 30-class annotations into 4 classes and providing novel benchmarks for multi-fragment pelvic fracture analysis using off-the-shelf architectures adapted to clinical trauma challenges. The 2D models used 192 × 192 inputs, while the 3D models used volumetric inputs of 128 × 128 × 128 or 128 × 128 × 16 voxels. Models were trained with composite loss functions (Dice combined with Focal or BCE loss) and evaluated with 5-fold cross-validation. U-Net and LinkNet were implemented with VGG19 and ResNet50 backbones; UNETR used a transformer-based encoder. Model performance was assessed via Dice coefficient, IoU, accuracy, sensitivity, and specificity, and inference speed and cross-model comparisons were conducted to analyze computational efficiency and segmentation quality. U-Net with ResNet50 achieved the highest 2D performance (Dice 0.991, IoU 0.982), while 3D U-Net with VGG19 led volumetric segmentation (Dice 0.9112). UNETR offered superior specificity (0.993) and consistent inference times (< 1 min per case), albeit with lower sensitivity (0.730) in complex fragment localization. Visual analyses confirmed high anatomical fidelity, and comparison with the existing literature showed that our models matched or exceeded top benchmarks in Dice score and generalizability. These results confirm the efficacy of deep learning-based segmentation for pelvic fracture analysis; the models' high accuracy and computational efficiency offer a strong foundation for integration into automated trauma imaging systems, pending external validation and workflow assessment.
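
The abstract names a composite Dice + Focal (or BCE) training objective over the 4 consolidated classes but does not publish code. The sketch below is a minimal PyTorch illustration of such a composite loss, assuming softmax outputs, an equal Dice/Focal weighting, a focal gamma of 2, and the reported 192 × 192 2D input size; everything beyond the loss names and the class count is an assumption, not the authors' implementation.

```python
# Minimal sketch of a composite Dice + Focal loss for 4-class pelvic bone
# segmentation (background, sacrum, left hip bone, right hip bone).
# The weighting, smoothing constant, and gamma are illustrative assumptions.
import torch
import torch.nn.functional as F

def dice_loss(probs: torch.Tensor, target_onehot: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # probs, target_onehot: (N, C, H, W) probabilities / one-hot labels
    dims = (0, 2, 3)
    intersection = (probs * target_onehot).sum(dims)
    denom = probs.sum(dims) + target_onehot.sum(dims)
    dice = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dice.mean()

def focal_loss(logits: torch.Tensor, target: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    # target: (N, H, W) integer class indices; gamma = 2 is an assumed default
    logp = F.log_softmax(logits, dim=1)
    logp_t = logp.gather(1, target.unsqueeze(1)).squeeze(1)  # log-prob of the true class
    p_t = logp_t.exp()
    return (-(1.0 - p_t) ** gamma * logp_t).mean()

def composite_loss(logits: torch.Tensor, target: torch.Tensor,
                   num_classes: int = 4, alpha: float = 0.5) -> torch.Tensor:
    # Equal Dice/Focal weighting (alpha = 0.5) is an assumption.
    probs = torch.softmax(logits, dim=1)
    onehot = F.one_hot(target, num_classes).permute(0, 3, 1, 2).float()
    return alpha * dice_loss(probs, onehot) + (1 - alpha) * focal_loss(logits, target)

if __name__ == "__main__":
    logits = torch.randn(2, 4, 192, 192)          # 2D inputs at 192 x 192, as reported
    labels = torch.randint(0, 4, (2, 192, 192))   # background + three bone classes
    print(float(composite_loss(logits, labels)))
```

The Dice + BCE variant mentioned in the abstract would simply swap `focal_loss` for a cross-entropy term on the same logits and targets.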
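
The evaluation metrics named in the abstract (Dice coefficient and IoU, alongside accuracy, sensitivity, and specificity) can be computed per class from predicted and reference label maps as sketched below. Restricting the averages to the three bone classes and excluding background are assumptions on our part; the abstract only lists the metric names.

```python
# Minimal sketch of per-class Dice and IoU for the 4-class pelvic task.
# Skipping class 0 (background) is an assumption, not stated in the abstract.
import numpy as np

def dice_and_iou(pred: np.ndarray, ref: np.ndarray, num_classes: int = 4):
    """pred, ref: integer label maps of identical shape (2D slices or 3D volumes)."""
    dice, iou = {}, {}
    for c in range(1, num_classes):                 # classes 1..3: bones only
        p, r = (pred == c), (ref == c)
        inter = np.logical_and(p, r).sum()
        union = np.logical_or(p, r).sum()
        denom = p.sum() + r.sum()
        dice[c] = 2.0 * inter / denom if denom else 1.0
        iou[c] = inter / union if union else 1.0
    return dice, iou

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.integers(0, 4, size=(192, 192))
    pred = ref.copy()
    pred[rng.random(ref.shape) < 0.05] = 0          # corrupt 5% of pixels to background
    print(dice_and_iou(pred, ref))
```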

Topics

Journal Article
