A CT-based multimodal fusion model for predicting outcomes in blunt chest trauma: A multicenter study.
Authors
Affiliations (6)
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China.
- Department of Radiology, Tianjin Medical University of General Hospital, China.
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China.
- Department of Radiology, the Affiliated Hospital of Qingdao University, China.
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China. Electronic address: [email protected].
- The Department of Radiology, Tianjin University Tianjin Hospital, 406 Jiefang Southern Road, Tianjin, China; Graduate School, Tianjin University, Tianjin, China. Electronic address: [email protected].
Abstract
This study aimed to develop a multimodal predictive model that integrates clinical variables, radiomic features (RFs), and deep learning-based features (DLFs) to improve prognostic accuracy in patients with blunt chest trauma (BCT). We retrospectively analyzed 337 patients with BCT from three medical centers. Clinical and CT imaging data, including emergency and follow-up scans, were obtained. RFs and DLFs, along with delta features representing temporal changes between scans, were extracted. After sequential feature selection, least absolute shrinkage and selection operator (LASSO) regression was applied to identify the optimal feature subset. Clinical-only, imaging-only, and fused models were developed, and performance was evaluated using the area under the receiver operating characteristic curve (AUC), calibration curves, and decision curve analysis. Rib fracture count, multiple injuries, and the hemopneumothorax-to-lung ratio (HPR) were identified as independent prognostic factors. The fusion models, particularly the delta-clinical-DLR model, achieved AUCs of 0.85 (95 % CI: 0.80-0.90) and 0.86 (95 % CI: 0.77-0.95) in the training and test sets, respectively. Compared with the clinical model alone, significant improvements in net reclassification (NRI up to 0.76) and integrated discrimination (IDI up to 0.26) were observed. Integrating multi-timepoint CT imaging with clinical variables through a multimodal fusion model significantly enhances prognostic performance in BCT, providing a robust tool for individualized risk prediction and clinical decision-making.
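The modeling workflow described above — LASSO-based feature selection followed by evaluation of a predictive model with AUC — can be sketched in general terms. The following is a minimal illustrative example, not the authors' code: the data are synthetic stand-ins for the extracted features, and the feature count, random seeds, and logistic-regression fusion step are assumptions for demonstration only.

```python
# Illustrative sketch of LASSO feature selection + AUC evaluation
# (synthetic data; NOT the study's actual pipeline or features).
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n, p = 337, 50  # n matches the cohort size; p is a hypothetical feature count
X = rng.normal(size=(n, p))
# Binary outcome driven by a few informative features, mimicking sparse signal
y = (X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Standardize so LASSO penalizes all features on a comparable scale
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

# LASSO shrinks uninformative coefficients to exactly zero,
# which acts as the feature-selection step
lasso = LassoCV(cv=5, random_state=0).fit(X_train_s, y_train)
selected = np.flatnonzero(lasso.coef_)

# Refit a simple model on the selected features and evaluate by AUC
clf = LogisticRegression().fit(X_train_s[:, selected], y_train)
auc = roc_auc_score(y_test, clf.predict_proba(X_test_s[:, selected])[:, 1])
print(f"{selected.size} features selected, test AUC = {auc:.2f}")
```

In a multimodal setting like the one described, the same selection step would be applied per feature family (clinical, RF, DLF, delta), with the surviving features concatenated before fitting the fused model.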