Uncovering ethical biases in publicly available fetal ultrasound datasets.
Authors
Affiliations (5)
- Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy. [email protected].
- Department of Innovative Technologies in Medicine and Dentistry, Università degli Studi G. D'Annunzio Chieti - Pescara, Chieti, Italy.
- Department of Information Engineering, Università Politecnica delle Marche, Ancona, Italy.
- Department of Political Sciences, Communication, and International Relations, Università di Macerata, Macerata, Italy.
- Institute for Technology and Global Health, PathCheck Foundation, Cambridge, MA, USA.
Abstract
We explore biases present in publicly available fetal ultrasound (US) imaging datasets currently available to researchers for training deep learning (DL) algorithms for prenatal diagnostics. As DL increasingly permeates medical imaging, the need to critically evaluate the fairness of the public benchmark datasets used to train these models becomes ever more urgent. Our investigation reveals a multifaceted bias problem, encompassing a lack of demographic representativeness, limited diversity in the clinical conditions depicted, and variability in the US technology used across datasets. We argue that these biases may significantly influence DL model performance and lead to inequities in healthcare outcomes. To address these challenges, we recommend a multilayered approach: promoting practices that ensure data inclusivity, such as diversifying data sources and populations, and refining modeling strategies to better account for population variance. These steps will enhance the trustworthiness of DL algorithms in fetal US analysis.