Pixel-level Radiomics and Deep Learning for Predicting Ki-67 Expression in Breast Cancer Based on Dual-modal Ultrasound Images.
Affiliations (6)
- Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei, Anhui 230022, China (W.W., D.Z., W.Z., Y.G., W.L., C.X.Z.); Department of Ultrasound, the First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, Anhui 241000, China (W.W., H.J.F.).
- Department of Ultrasound, WuHu Hospital, East China Normal University (The Second People's Hospital, WuHu), Wuhu, Anhui 241001, China (F.X.).
- Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei, Anhui 230022, China (W.W., D.Z., W.Z., Y.G., W.L., C.X.Z.).
- Department of Ultrasound, The Affiliated Hospital of Xuzhou Medical University, Xuzhou, Jiangsu 221000, China (X.J.W.).
- Department of Ultrasound, the First Affiliated Hospital of Wannan Medical College (Yijishan Hospital), Wuhu, Anhui 241000, China (W.W., H.J.F.).
- Department of Ultrasound, the First Affiliated Hospital of Anhui Medical University, Hefei, Anhui 230022, China (W.W., D.Z., W.Z., Y.G., W.L., C.X.Z.). Electronic address: [email protected].
Abstract
This study aimed to develop a deep learning model using a novel pixel-level radiomics approach based on two-dimensional (2D) and strain elastography (SE) ultrasound images to predict Ki-67 expression in breast cancer (BC). This multicenter study included 1031 BC patients, who were divided into training (n = 616), internal validation (n = 265), and external test (n = 150) cohorts. An additional 63 patients were prospectively enrolled for further validation. The deep learning model, termed Vision-Mamba, predicts Ki-67 expression by integrating ultrasound (2D and SE) images with pixel-level radiomics feature maps (RFMs). A combined model was subsequently constructed by incorporating independent clinical predictors. Model performance was assessed using receiver operating characteristic (ROC) curves, calibration curves, and decision curve analysis (DCA). SHapley Additive exPlanations (SHAP) analysis was applied to enhance interpretability. We developed a Vision-Mamba-US-RFMs-Clinical (V-MURC) model that integrates ultrasound images, RFMs, and clinical data for accurate prediction of Ki-67 expression in BC. The area under the ROC curve (AUC) values for the internal validation, external test, and prospective validation cohorts were 0.954 (95% CI, 0.929-0.975), 0.941 (95% CI, 0.903-0.975), and 0.945 (95% CI, 0.883-0.989), respectively, demonstrating excellent discrimination and calibration. Compared with individual models, the V-MURC model achieved significantly superior performance across all datasets (DeLong test, P < 0.05). Calibration curves and DCA further supported its clinical applicability. SHAP analysis provided visual interpretability of the model's decision-making process. The V-MURC model based on pixel-level RFMs can accurately predict Ki-67 expression in BC and may serve as a valuable tool for individualized treatment decision-making in clinical practice.
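The abstract's key methodological element is the pixel-level (voxel-based) radiomics feature map. As a minimal illustrative sketch only, the snippet below shows how such parametric maps can be generated with PyRadiomics' voxel-based extraction mode; the file names, kernel settings, and chosen features are assumptions for illustration and are not stated in the paper.

```python
# Hypothetical sketch: pixel-level radiomics feature maps (RFMs) via PyRadiomics
# voxel-based extraction. Inputs, settings, and feature choices are illustrative
# assumptions, not the authors' actual pipeline.
import SimpleITK as sitk
from radiomics import featureextractor

# kernelRadius sets the local window around each pixel used to compute features.
settings = {"kernelRadius": 2, "maskedKernel": True, "binWidth": 25}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.disableAllFeatures()
extractor.enableFeaturesByName(firstorder=["Entropy"], glcm=["Contrast"])

# With voxelBased=True, each enabled feature is returned as a parametric map
# (a SimpleITK image) aligned with the input ultrasound image.
result = extractor.execute("bmode_us.nii.gz", "tumor_mask.nii.gz", voxelBased=True)
for name, value in result.items():
    if isinstance(value, sitk.Image):  # keep feature maps, skip diagnostic entries
        sitk.WriteImage(value, f"{name}_rfm.nrrd")
```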
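The reported AUCs are given with 95% confidence intervals. As a sketch under stated assumptions, the following shows one common way (a percentile bootstrap) to compute an AUC with a 95% CI from a cohort's labels and predicted probabilities; the paper's exact CI method is not specified here, and `y_true`/`y_prob` are placeholders.

```python
# Hypothetical sketch: AUC with a bootstrap 95% CI for a validation cohort.
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_prob, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    point = roc_auc_score(y_true, y_prob)           # point estimate
    boot, n = [], len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample with replacement
        if len(np.unique(y_true[idx])) < 2:         # need both classes present
            continue
        boot.append(roc_auc_score(y_true[idx], y_prob[idx]))
    lo, hi = np.percentile(boot, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return point, (lo, hi)
```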