
3D deep-learning radiomics from MR-T2WI for predicting placenta accreta spectrum disorders: A multicenter study.

February 16, 2026

Authors

Zhang X, Guo C, Guo S, Zhu X, Yu N, Han D, Huang X, Li Y

Affiliations (7)

  • Department of Medical Techniques, Shaanxi University of Chinese Medicine, Xianyang, China.
  • Department of Radiology, The Second Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, China.
  • The First School of Clinical Medicine of Lanzhou University, Lanzhou, China.
  • Department of Radiology, The First Hospital of Lanzhou University, Lanzhou, China.
  • Department of Radiology, The Affiliated Hospital of Shaanxi University of Chinese Medicine, Xianyang, China.
  • Department of Radiology, Yan'an University Affiliated Hospital, Yan'an, China.
  • Department of Radiology, First People's Hospital of Shangqiu, Shangqiu, China.

Abstract

To develop a three-dimensional (3D) deep-learning radiomics model based on magnetic resonance T2-weighted imaging (T2WI) for predicting the risk of placenta accreta spectrum (PAS) disorders. This retrospective multicenter study involved 601 suspected PAS cases: Center A contributed 476 cases, while Centers B and C provided 63 and 62 cases, respectively. The 539 cases from Centers A and B served as the data source for model training and internal validation and were divided into a training set (377 cases) and a validation set (162 cases) by stratified random sampling. The 62 cases from Center C were designated as the independent external test set. All patients underwent magnetic resonance imaging (MRI), with sagittal T2WI acquired. Clinical features predictive of PAS were identified using univariate and multivariate logistic regression analyses. Radiologist diagnosis was performed by radiologists of varying levels of experience. Radiomics features were extracted from the T2WI scans, and volumetric features were derived from the entire 3D regions of interest. Deep features were then extracted using pretrained 3D ResNet50, DenseNet121, and ShuffleNet models enhanced via transfer learning. Key predictive features were selected using correlation filtering and Lasso regression, yielding a 3D radiomics signature. Model performance was evaluated on the independent validation cohort using receiver operating characteristic (ROC) curve analysis, with area under the curve (AUC) comparisons. The 3D deep-learning (DL3D) model (single modality) achieved the highest performance across all datasets (train AUC = 0.912; validation AUC = 0.864; test AUC = 0.817), significantly outperforming the traditional radiomics (AUC range: 0.742-0.837) and clinical (AUC range: 0.826-0.841) models.
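The feature-selection step described above (correlation filtering followed by Lasso regression) can be sketched as follows. This is a minimal toy illustration on synthetic data; the correlation threshold, feature counts, and cross-validation settings are assumptions, not the study's actual configuration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(377, 50))     # toy stand-in: 377 training cases, 50 candidate features
y = rng.integers(0, 2, size=377)   # toy binary PAS label

# Step 1: correlation filtering - greedily keep a feature only if its
# absolute correlation with every already-kept feature is <= 0.9
# (threshold is an illustrative assumption).
corr = np.abs(np.corrcoef(X, rowvar=False))
keep = []
for j in range(corr.shape[1]):
    if all(corr[j, k] <= 0.9 for k in keep):
        keep.append(j)
X_filtered = X[:, keep]

# Step 2: Lasso with a cross-validated penalty; features with nonzero
# coefficients form the retained signature.
Xs = StandardScaler().fit_transform(X_filtered)
lasso = LassoCV(cv=5, random_state=0).fit(Xs, y)
selected = [keep[i] for i, c in enumerate(lasso.coef_) if c != 0]
print(len(keep), len(selected))
```

On real radiomics features, many pairs are strongly correlated, so step 1 typically removes far more features than it does on this uncorrelated toy data.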
Integration of DL3D, radiomics, and clinical features into a combined multimodal model further enhanced predictive accuracy, yielding AUCs of 0.927 (train), 0.867 (validation), and 0.847 (test). Notably, both junior and senior radiologists demonstrated substantially lower diagnostic accuracy (AUC range: 0.586-0.623) than all models. The standalone DL3D model significantly surpassed expert assessments, and the combined model delivered the most pronounced performance advantage over human interpretation.
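The AUC comparison underlying these results can be illustrated with a short sketch: compute ROC AUC for model probabilities and for reader scores on a held-out set. All data here are synthetic stand-ins; the study's real predictions and reader ratings are not reproduced.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=62)   # toy stand-in for the 62-case external test set

# Synthetic model probabilities with deliberate class separation,
# and uninformative 5-point reader confidence scores (both assumptions).
model_prob = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, 62), 0.0, 1.0)
reader_score = rng.integers(1, 6, size=62)

auc_model = roc_auc_score(y_true, model_prob)
auc_reader = roc_auc_score(y_true, reader_score)
print(round(auc_model, 3), round(auc_reader, 3))
```

In the study itself, AUCs on the train/validation/test splits were compared across the clinical, radiomics, DL3D, and combined models as well as the radiologist readings; the sketch shows only the basic computation.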

Topics

Journal Article
