Application of multi-scale feature extraction and explainable machine learning in chest x-ray position evaluation within an integrated learning framework.
Authors
Affiliations (5)
- Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, China.
- State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China.
- State Key Laboratory of Human-Machine Hybrid Augmented Intelligence, and Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an, China. [email protected].
- Department of Ultrasound, the Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an Jiaotong University, Xi'an, China. [email protected].
- Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, China. [email protected].
Abstract
This study presents a novel fusion network combining deep learning and machine learning for quantitative and interpretable assessment of chest X-ray positioning, aiming to identify the critical factors in patient positioning. In this retrospective study, we analyzed 3300 chest radiographs from a Chinese medical institution, collected between March 2021 and December 2022. The dataset was partitioned into the XJ_chest_21 subset, used to train the automated segmentation model, and the XJ_chest_22 subset, used to validate three classification models: Random Forest Fusion Network (RFFN), Threshold Classification (TC), and Multivariate Logistic Regression (MLR). After five positioning indicators were measured automatically in each image, the measurements were input into the models to assess positioning quality. We compared the performance of the three classification models using AUC, accuracy, sensitivity, and specificity, and applied SHAP (Shapley Additive Explanations) to interpret feature importance in the decision-making process of the RFFN model. We also evaluated measurement consistency between the Automated Measurement Model (AMM) and radiologists. U-net++ demonstrated significantly superior multi-target segmentation accuracy compared with U-net (mean Dice: 0.926 vs. 0.812). The five positioning metrics showed excellent agreement between the AMM and the reference standard (r = 0.93). ROC analysis indicated that RFFN performed significantly better in overall image quality classification (AUC, 0.982; 95% CI: 0.963, 0.993) than both TC (AUC, 0.959; 95% CI: 0.923, 0.995) and MLR (AUC, 0.953; 95% CI: 0.933, 0.974). Our study introduces a novel segmentation-based random forest fusion network that achieves accurate image positioning classification and identifies critical operational factors. Furthermore, the clinical interpretability of the fusion model was enhanced through the application of the SHAP method.
Question
How can AI-driven, interpretable methods be used to assess patient positioning in chest radiography and improve radiographers' accuracy?
Findings
The Random Forest Fusion Network (RFFN) outperformed Threshold Classification (TC) and Multivariate Logistic Regression (MLR) in positioning classification (AUC = 0.98).
Clinical relevance
An integrated framework that combines deep learning and machine learning achieves accurate image positioning classification, identifies critical operational factors, enables expert-level image quality assessment, and delivers automated feedback to radiographers.
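To illustrate the classification-and-explanation step described above, the following is a minimal sketch, not the authors' implementation: a random forest trained on five positioning indicators, evaluated with ROC AUC, and interpreted with SHAP feature rankings via scikit-learn and the shap package. The feature names and data are hypothetical placeholders; in the study, the five indicators are derived from the segmentation masks.

```python
# Hedged sketch: random-forest classification of positioning quality from five
# indicators, with SHAP used to rank feature importance. Data and feature names
# are synthetic placeholders, not the study's measurements.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical positioning indicators (one row per radiograph).
feature_names = ["rotation", "inspiration", "centering", "scapula_overlap", "collimation"]
X = rng.normal(size=(600, 5))
# Synthetic label: 1 = acceptable positioning, 0 = unacceptable.
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=600) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Discrimination on held-out images.
auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])
print(f"AUC: {auc:.3f}")

# SHAP values quantify how each indicator pushes an individual prediction
# towards "acceptable" or "unacceptable" positioning.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_test)
# Older shap versions return a list per class; newer ones return one array
# with a trailing class dimension. Take the positive-class values either way.
sv = shap_values[1] if isinstance(shap_values, list) else shap_values[..., 1]

# Rank indicators by mean absolute SHAP value (global feature importance).
importance = np.abs(sv).mean(axis=0)
for name, score in sorted(zip(feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

In this sketch the mean absolute SHAP value serves as a global importance score, mirroring how the study uses SHAP to expose which positioning factors drive the RFFN's quality decisions.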