A Deep Learning Framework for Enhanced Ovarian Adnexal Mass Classification Using Routinely Acquired Ultrasound Images.
Authors
Affiliations (5)
- Department of Radiology, Mayo Clinic, Rochester, MN, USA.
- Department of Obstetrics and Gynecology, Mayo Clinic, Rochester, MN, USA.
- Division of Gynecology, European Institute of Oncology, IRCCS, Milan, Italy.
- Department of Obstetrics and Gynecology, University of Insubria, Varese, Italy.
- Department of Radiology, Mayo Clinic, Rochester, MN, USA. [email protected].
Abstract
Accurate classification of ovarian adnexal masses is crucial for clinical decision-making. B-mode ultrasound is widely used for imaging adnexal masses, yet complex structures can be difficult to differentiate. We propose a deep learning framework that integrates radiomics and mass substructure analysis to enhance diagnostic accuracy. A retrospective cohort of 230 patients with adnexal masses imaged via routine ultrasound who subsequently underwent surgery (or had at least 10 months of follow-up) was included in this study. Our deep learning model first automatically segments adnexal masses and separately distinguishes their fluid and solid components. A multi-modal classification network then differentiates benign from malignant adnexal masses. Additionally, we developed an explainability method that enhances clinical interpretability by using feature-embedding similarity to identify the training samples most similar to each test case. Our framework achieved 90% accuracy and 94% AUC at the image level, and 91% accuracy and 92% AUC at the patient level, outperforming ADNEX (77% accuracy, 92% AUC), O-RADS 2019 (84% accuracy, 89% AUC), and O-RADS 2022 (80% accuracy, 88% AUC). This approach will enable exploring clinical applications in which AI assistance provides malignancy predictions alongside visualizations of similar historical cases to support decision-making.
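The retrieval of similar training cases described above can be illustrated with a minimal sketch. The abstract does not specify the similarity metric or embedding dimensionality, so this example assumes cosine similarity over fixed-length feature embeddings; the function name and toy data are hypothetical, not from the paper.

```python
import numpy as np

def top_k_similar(test_embedding, train_embeddings, k=3):
    """Return indices of the k training embeddings most similar
    to a test embedding, ranked by cosine similarity.

    Assumed setup (not from the paper): embeddings are 1-D
    feature vectors produced by the classification network.
    """
    # Normalize so that dot products equal cosine similarities.
    t = test_embedding / np.linalg.norm(test_embedding)
    tr = train_embeddings / np.linalg.norm(train_embeddings, axis=1, keepdims=True)
    sims = tr @ t
    # Sort descending and keep the top k indices.
    return np.argsort(sims)[::-1][:k]

# Toy example: four 2-D training embeddings and one test embedding.
train = np.array([[1.0, 0.0], [0.0, 1.0], [0.9, 0.1], [0.5, 0.5]])
test = np.array([1.0, 0.1])
print(top_k_similar(test, train, k=2))  # indices of the 2 most similar cases
```

In a clinical-review setting, the returned indices would map back to the historical ultrasound images (and their surgical or follow-up outcomes) shown alongside the model's malignancy prediction.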