
Construction and clinical validation of a fetal brain magnetic resonance imaging-prediction model based on multimodal AI fusion algorithm.

December 10, 2025 · PubMed

Authors

Liu B, Zhang F, Guo J, Lu W, Zhu Z, Liu Y, Yin C

Affiliations (2)

  • Department of Radiology, Shenzhen Maternity and Child Healthcare Hospital, Women and Children's Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China. Electronic address: [email protected].
  • Department of Radiology, Shenzhen Maternity and Child Healthcare Hospital, Women and Children's Medical Center, Southern Medical University, Shenzhen, Guangdong Province, China.

Abstract

To develop a multimodal artificial intelligence (AI) fusion model for predicting abnormal fetal brain development from magnetic resonance imaging (MRI). Fetal brain MRI data and clinical indicators were collected from pregnant women (January 2021 to December 2023), who were split 7:3 into training and validation sets. In the training set, key predictors were identified via univariate analysis and multivariate logistic regression, including both clinical indicators and continuous MRI biometric parameters. Three multimodal AI fusion models were developed: a Convolutional Neural Network-Recurrent Neural Network (CNN-RNN) model, an attention mechanism-based model, and a feature concatenation model. Performance was assessed by accuracy, precision, recall, F1-score, and the area under the receiver operating characteristic curve (AUC). Among the 806 participants, 108 cases (19.15%) had fetal brain abnormalities in the training set (n = 564) and 45 cases (18.59%) in the validation set (n = 242). Multivariate logistic regression analysis showed that gestational age, gestational diabetes mellitus, alpha-fetoprotein, lateral ventricular width, and sulcation development score were independent risk factors for fetal brain abnormalities. The attention mechanism fusion model achieved the highest AUC in both the training set (0.876) and the validation set (0.869), significantly outperforming the CNN-RNN fusion model (AUC in training set: 0.776; AUC in validation set: 0.718) and the feature concatenation fusion model (AUC in training set: 0.754; AUC in validation set: 0.720). The multimodal AI fusion model, particularly the one using attention mechanisms, effectively identifies fetuses at high risk of brain abnormalities, offering potential for early clinical intervention and improved prenatal counseling to enhance detection and prognosis of neurological disorders.
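The abstract does not describe the fusion architectures in detail. The sketch below is a minimal, illustrative attention-weighted fusion of MRI-derived image features with clinical indicators for binary risk prediction, written in PyTorch; the layer sizes, variable names, and exact attention formulation are assumptions for demonstration and are not taken from the paper.

```python
# Illustrative sketch only: attention-weighted fusion of MRI-derived image
# features and clinical indicators for binary classification. Dimensions and
# names are assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class AttentionFusionClassifier(nn.Module):
    def __init__(self, img_dim=512, clin_dim=5, hidden=128):
        super().__init__()
        # Project each modality into a shared hidden space.
        self.img_proj = nn.Sequential(nn.Linear(img_dim, hidden), nn.ReLU())
        self.clin_proj = nn.Sequential(nn.Linear(clin_dim, hidden), nn.ReLU())
        # Learn a scalar attention weight per modality.
        self.attn = nn.Linear(hidden, 1)
        # Binary classification head (abnormal vs. normal development).
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, img_feat, clin_feat):
        # Stack the two modality embeddings: shape (batch, 2, hidden).
        tokens = torch.stack([self.img_proj(img_feat), self.clin_proj(clin_feat)], dim=1)
        weights = torch.softmax(self.attn(tokens), dim=1)  # (batch, 2, 1)
        fused = (weights * tokens).sum(dim=1)              # attention-weighted sum
        return self.head(fused).squeeze(-1)                # raw logit

# Example usage with random tensors standing in for extracted MRI features
# (e.g., a CNN backbone's output) and five clinical/biometric predictors such
# as those reported in the abstract (gestational age, GDM, AFP, lateral
# ventricular width, sulcation development score).
model = AttentionFusionClassifier()
img_feat = torch.randn(8, 512)
clin_feat = torch.randn(8, 5)
risk = torch.sigmoid(model(img_feat, clin_feat))  # predicted abnormality risk
```

In such a setup, the predicted probabilities would then be scored with accuracy, precision, recall, F1-score, and AUC (e.g., sklearn.metrics.roc_auc_score), matching the evaluation metrics reported in the study.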

Topics

Journal Article
