
Application of multimodal integration to develop preoperative diagnostic models for borderline and malignant ovarian tumors.

October 23, 2025

Authors

Kunishima A, Inaba D, Iyoshi S, Ikeda Y, Goto M, Muramatsu R, Hashimoto M, Yoshida K, Mogi K, Yoshihara M, Nagao Y, Tamauchi S, Yokoi A, Yoshikawa N, Niimi K, Koizumi N, Kajiyama H

Affiliations (6)

  • Department of Obstetrics and Gynecology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya-shi, 466-8550, Aichi, Japan.
  • Department of Mechanical and Intelligent Systems Engineering, Graduate School of Informatics and Engineering, The University of Electro-Communications, 1-5-1 Chofugaoka, Chofu-shi, 182-8585, Tokyo, Japan.
  • Department of Obstetrics and Gynecology, Nagoya University Graduate School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya-shi, 466-8550, Aichi, Japan. [email protected].
  • Institute for Advanced Research, Nagoya University, Furo-cho, Chikusa-ku, Nagoya, 464-8601, Japan. [email protected].
  • Department of Obstetrics and Gynecology, Kasugai Municipal Hospital, 1-1-1 Takaki-cho, Kasugai-shi, 486-8510, Aichi, Japan.
  • Nagoya University School of Medicine, 65 Tsurumai-cho, Showa-ku, Nagoya-shi, 466-8550, Aichi, Japan.

Abstract

Malignant ovarian tumors (MOTs) and borderline ovarian tumors (BOTs) differ in treatment strategy and prognosis. However, accurate preoperative diagnosis remains challenging, and improving diagnostic accuracy is crucial. We developed and validated a system using artificial intelligence (AI) to integrate machine learning (ML) models based on blood test data with deep learning (DL) models based on magnetic resonance imaging (MRI) findings to distinguish between MOT and BOT. We analyzed 78 patients with malignant serous ovarian tumors and 31 with borderline serous ovarian tumors treated at our institution. A classification model was developed using ML for blood test data, and a DL model was constructed using MRI data. By integrating these models, we developed three fusion models as multimodal diagnostic AI and compared them with the standalone models. Performance was evaluated using precision, recall, and accuracy. The classification model using the Light Gradient Boosting Machine achieved an accuracy of 0.825, and the DL model using the Visual Geometry Group 16-layer network achieved an accuracy of 0.722 for discriminating BOT from MOT. The intermediate, late, and dense fusion models achieved accuracies of 0.809, 0.776, and 0.825, respectively. Integrating multimodal information such as blood test results and imaging data may enhance learning efficiency and improve diagnostic accuracy.
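To make the fusion idea concrete, the sketch below shows a minimal late-fusion pipeline in Python: a LightGBM classifier on tabular blood-test features and a VGG16-based classifier on MRI slices, with the two predicted probabilities averaged into a single BOT/MOT score. This is only an illustration of the general approach described in the abstract; the synthetic data, feature count, equal fusion weights, and all hyperparameters are assumptions, not the authors' actual configuration, and the intermediate and dense fusion variants (which combine learned features rather than final probabilities) are not shown.

```python
# Minimal late-fusion sketch: LightGBM on blood-test features, VGG16 on MRI
# slices, predictions combined by averaging class probabilities.
# Synthetic data stands in for the real cohort; everything here is illustrative.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
n_patients = 16                                   # tiny synthetic cohort

# --- Tabular branch: blood-test features (hypothetical markers) ---
X_blood = rng.normal(size=(n_patients, 20)).astype(np.float32)
y = rng.integers(0, 2, size=n_patients)           # 0 = BOT, 1 = MOT (random labels)
tabular_model = LGBMClassifier(n_estimators=100, learning_rate=0.05,
                               min_child_samples=2)  # relaxed for the tiny cohort
tabular_model.fit(X_blood, y)
p_tabular = tabular_model.predict_proba(X_blood)[:, 1]   # P(MOT) from blood tests

# --- Imaging branch: VGG16 backbone with a 2-class head for MRI slices ---
vgg = models.vgg16(weights=None)                  # pretrained weights optional
vgg.classifier[6] = nn.Linear(4096, 2)            # replace the 1000-class head
vgg.eval()

mri = torch.randn(n_patients, 3, 224, 224)        # one representative slice per patient
with torch.no_grad():
    p_image = torch.softmax(vgg(mri), dim=1)[:, 1].numpy()  # P(MOT) from MRI

# --- Late fusion: combine the two unimodal probabilities ---
# Equal weighting is an assumption; weights could be tuned on a validation split.
p_fused = 0.5 * p_tabular + 0.5 * p_image
pred = (p_fused >= 0.5).astype(int)
print("Fused accuracy on synthetic data:", (pred == y).mean())
```

In this late-fusion form, each modality is trained independently and only their output probabilities are combined; intermediate or dense fusion would instead concatenate features or logits from both branches and train a joint classifier on top.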

Topics

Ovarian Neoplasms
Journal Article
