Deep learning and radiomics integration of photoacoustic/ultrasound imaging for non-invasive prediction of luminal and non-luminal breast cancer subtypes.

Authors

Wang M, Mo S, Li G, Zheng J, Wu H, Tian H, Chen J, Tang S, Chen Z, Xu J, Huang Z, Dong F

Affiliations (5)

  • Department of Ultrasound, Shenzhen People's Hospital, The Second Clinical Medical College, Jinan University, Shenzhen, Guangdong, China.
  • Ultrasound imaging system development department, Shenzhen Mindray Bio-Medical Electronics Co., Ltd, Shenzhen, China.
  • Department of Ultrasound, Shenzhen People's Hospital, The Second Clinical Medical College, Jinan University, Shenzhen, Guangdong, China. [email protected].
  • Department of Ultrasound, Shenzhen People's Hospital, The Second Clinical Medical College, Jinan University, Shenzhen, Guangdong, China. [email protected].
  • Department of Ultrasound, Shenzhen People's Hospital, The Second Clinical Medical College, Jinan University, Shenzhen, Guangdong, China. [email protected].

Abstract

This study aimed to develop a Deep Learning Radiomics integrated model (DLRN), which combines photoacoustic/ultrasound (PA/US) imaging with clinical and radiomics features to distinguish between luminal and non-luminal breast cancer (BC) in a preoperative setting. A total of 388 BC patients were included, with 271 in the training group and 117 in the testing group. Radiomics and deep learning features were extracted from PA/US images using Pyradiomics and ResNet50, respectively. Feature selection was performed using independent-sample t-tests, Pearson correlation analysis, and LASSO regression to build a Deep Learning Radiomics (DLR) model. Based on the results of univariate and multivariate logistic regression analyses, the DLR model was combined with valuable clinical features to construct the DLRN model. Model efficacy was assessed using AUC, accuracy, sensitivity, specificity, and negative predictive value (NPV). The DLR model comprised 3 radiomic features and 6 deep learning features, which, when combined with significant clinical predictors, formed the DLRN model. In the testing set, the AUC of the DLRN model (0.924 [0.877-0.972]) was higher than that of the DLR (AUC 0.847 [0.758-0.936], p = 0.026), DL (AUC 0.822 [0.725-0.919], p = 0.06), Rad (AUC 0.717 [0.597-0.838], p < 0.001), and clinical (AUC 0.820 [0.745-0.895], p = 0.002) models. These findings indicate that the DLRN model (integrated model) exhibited the most favorable predictive performance among all models evaluated. The DLRN model effectively integrates PA/US imaging with clinical data, showing potential for preoperative molecular subtype prediction and guiding personalized treatment strategies for BC patients.
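The abstract describes a multi-step workflow: hand-crafted radiomics features (Pyradiomics) plus deep features (ResNet50), feature selection by t-test, Pearson correlation, and LASSO, and finally fusion of the DLR signature with clinical predictors in a logistic model. The sketch below is a minimal, hypothetical illustration of that kind of pipeline, not the authors' published implementation: the file paths, ImageNet-pretrained ResNet50 weights, thresholds (p < 0.05, |r| > 0.9), and all function names are illustrative assumptions.

```python
# Hypothetical sketch of a DLR/DLRN-style pipeline: radiomics + deep features,
# t-test / Pearson / LASSO selection, then logistic-regression fusion with
# clinical predictors. Paths, weights, and thresholds are assumptions.
import numpy as np
import torch
from torchvision import models, transforms
from radiomics import featureextractor
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score
from PIL import Image


def radiomics_features(image_path: str, mask_path: str) -> np.ndarray:
    """Hand-crafted radiomics features for one PA/US image and lesion mask."""
    extractor = featureextractor.RadiomicsFeatureExtractor()
    result = extractor.execute(image_path, mask_path)
    return np.array([v for k, v in result.items() if k.startswith("original_")],
                    dtype=float)


def deep_features(image_path: str) -> np.ndarray:
    """2048-d pooled ResNet50 features (ImageNet weights assumed) for one image."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    encoder = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()
    prep = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    x = prep(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return encoder(x).flatten().numpy()


def select_features(X: np.ndarray, y: np.ndarray,
                    p_thresh: float = 0.05, r_thresh: float = 0.9) -> np.ndarray:
    """t-test, then Pearson redundancy pruning, then LASSO; returns kept column indices."""
    # 1. Keep features that differ between luminal (y=1) and non-luminal (y=0).
    pvals = np.array([ttest_ind(X[y == 1, j], X[y == 0, j]).pvalue
                      for j in range(X.shape[1])])
    keep = np.where(pvals < p_thresh)[0]
    # 2. Drop the second feature of every highly correlated pair.
    corr = np.corrcoef(X[:, keep], rowvar=False)
    drop = {j for i in range(len(keep)) for j in range(i + 1, len(keep))
            if abs(corr[i, j]) > r_thresh}
    keep = np.array([k for i, k in enumerate(keep) if i not in drop])
    # 3. LASSO retains features with non-zero coefficients.
    Xs = StandardScaler().fit_transform(X[:, keep])
    lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
    return keep[np.abs(lasso.coef_) > 1e-8]


def fit_dlrn(dlr_score: np.ndarray, clinical: np.ndarray, y: np.ndarray):
    """Fuse the DLR score with clinical predictors in a logistic model (DLRN analogue)."""
    Z = np.column_stack([dlr_score, clinical])
    model = LogisticRegression(max_iter=1000).fit(Z, y)
    print("training AUC:", roc_auc_score(y, model.predict_proba(Z)[:, 1]))
    return model
```

In this sketch the DLR score would come from a model fitted on the selected radiomics and deep feature columns; evaluation on a held-out testing set (AUC, sensitivity, specificity, NPV) follows the same pattern as the training AUC shown above.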

Topics

  • Deep Learning
  • Photoacoustic Techniques
  • Breast Neoplasms
  • Ultrasonography, Mammary
  • Journal Article
