Artificial Intelligence-Assisted Image Extraction in Neonatal Echocardiography for Congenital Heart Disease Diagnosis in Sub-Saharan Africa: Protocol for Model Development.
Affiliations (8)
- Digital Technology and Innovation Hub, Health Research Foundation Buea, Buea, Cameroon.
- Division of Paediatric Cardiology, Department of Paediatrics and Child Health, University of Cape Town, Cape Town, South Africa.
- School of Biomedical Engineering and Imaging Sciences, King's College London, London, United Kingdom.
- School of Computing, Ulster University, Belfast, United Kingdom.
- St. Elizabeth Catholic General Hospital, Shisong Cardiac Centre, Kumbo, Cameroon.
- Department of Computing, South Kensington Campus, Imperial College London, London, United Kingdom.
- School of Medicine, Ulster University, Londonderry, United Kingdom.
- South African Medical Research Council, Cape Town, South Africa.
Abstract
Background: Sub-Saharan Africa (SSA) bears the highest global burden of under-5 mortality, with congenital heart disease (CHD) as a major contributor. Despite advancements in high-income countries, CHD-related mortality in SSA remains largely unchanged due to limited diagnostic capacity and centralized health care. While pulse oximetry aids early detection, confirmation typically relies on echocardiography, a procedure constrained by a shortage of specialized personnel. Artificial intelligence (AI) offers a promising solution to bridge this diagnostic gap.

Objective: This study aims to develop an AI-assisted echocardiography system that enables nonexpert operators, such as nurses, midwives, and medical doctors, to perform basic cardiac ultrasound sweeps on neonates suspected of CHD and extract accurate cardiac images for remote interpretation by a pediatric cardiologist.

Methods: The study will use a 2-phase approach to develop a deep learning model for real-time cardiac view detection in neonatal echocardiography, utilizing data from St. Padre Pio Hospital in Cameroon and the Red Cross War Memorial Children's Hospital in South Africa to ensure demographic diversity. In phase 1, the model will be pretrained on retrospective data from nearly 500 neonates (0-28 days old). Phase 2 will fine-tune the model using prospective data from 1000 neonates, which include background elements absent in the retrospective dataset, enabling adaptation to local clinical environments. The datasets will consist of short and continuous echocardiographic video clips covering 10 standard cardiac views, as defined by the American Society of Echocardiography. The model architecture will leverage convolutional neural networks and convolutional long short-term memory (ConvLSTM) layers, inspired by the interleaved visual memory framework, which integrates fast and slow feature extractors via a shared temporal memory mechanism. Video preprocessing, annotation with predefined cardiac view codes using Labelbox, and training with TensorFlow and PyTorch will be performed. Reinforcement learning will guide the dynamic use of the feature extractors during training. Iterative refinement, informed by clinical input, will ensure that the model effectively distinguishes correct from incorrect views in real time, enhancing its usability in resource-limited settings.

Results: Retrospective data collection for the project began in September 2024, and to date, data from 308 neonates have been collected and labeled. In parallel, the initial model framework has been developed and training initiated using a subset of the labeled data. The project is currently in the intensive execution phase, with all objectives progressing in parallel and final results expected within 10 months.

Conclusions: The AI-assisted echocardiography model developed in this project holds promise for improving early CHD diagnosis and care in SSA and other low-resource settings.

International Registered Report Identifier (IRRID): DERR1-10.2196/75270.
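To make the interleaved fast/slow design concrete, the sketch below shows one minimal way such a model could be wired up in PyTorch: two convolutional feature extractors of different capacities writing into a shared ConvLSTM memory, with a per-frame classification head over the 10 target views. All class names, layer widths, input sizes, and the fixed frame schedule that stands in for the learned reinforcement-learning policy are illustrative assumptions, not the project's actual model code.

```python
# Minimal sketch of an interleaved fast/slow view classifier.
# Assumptions (not from the protocol): PyTorch, grayscale 112x112 frames,
# the specific layer widths, and a fixed "run the slow extractor every
# N frames" schedule standing in for the learned reinforcement-learning policy.
import torch
import torch.nn as nn

NUM_VIEWS = 10  # standard cardiac views per the American Society of Echocardiography


class ConvLSTMCell(nn.Module):
    """Single ConvLSTM cell acting as the shared temporal memory."""

    def __init__(self, in_ch: int, hid_ch: int, kernel: int = 3):
        super().__init__()
        self.hid_ch = hid_ch
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel, padding=kernel // 2)

    def forward(self, x, state):
        h, c = state
        i, f, g, o = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)
        h = o * torch.tanh(c)
        return h, c


def make_extractor(out_ch: int, width: int) -> nn.Sequential:
    """Small CNN; 'width' controls the fast (cheap) vs slow (high-capacity) variant."""
    return nn.Sequential(
        nn.Conv2d(1, width, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(width, 2 * width, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(2 * width, out_ch, 3, stride=2, padding=1), nn.ReLU(),
    )


class InterleavedViewClassifier(nn.Module):
    """Fast and slow extractors write into one ConvLSTM memory; a per-frame
    head predicts which of the 10 standard views is currently on screen."""

    def __init__(self, feat_ch: int = 64, slow_every: int = 4):
        super().__init__()
        self.feat_ch = feat_ch
        self.slow_every = slow_every
        self.fast = make_extractor(feat_ch, width=8)   # runs on most frames
        self.slow = make_extractor(feat_ch, width=32)  # runs occasionally
        self.memory = ConvLSTMCell(feat_ch, feat_ch)
        self.head = nn.Linear(feat_ch, NUM_VIEWS)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, 1, H, W) grayscale echocardiographic frames
        b, t, _, height, width = clip.shape
        h = clip.new_zeros(b, self.feat_ch, height // 8, width // 8)
        c = torch.zeros_like(h)
        logits = []
        for i in range(t):
            extractor = self.slow if i % self.slow_every == 0 else self.fast
            feat = extractor(clip[:, i])              # (b, feat_ch, H/8, W/8)
            h, c = self.memory(feat, (h, c))          # update shared temporal memory
            logits.append(self.head(h.mean(dim=(2, 3))))  # per-frame view logits
        return torch.stack(logits, dim=1)             # (batch, time, NUM_VIEWS)


if __name__ == "__main__":
    model = InterleavedViewClassifier()
    dummy_clip = torch.randn(2, 8, 1, 112, 112)  # 2 clips of 8 frames each
    print(model(dummy_clip).shape)               # torch.Size([2, 8, 10])
```

In a full implementation, the fixed slow_every schedule would be replaced by the reinforcement-learning policy described in the protocol, which decides per frame whether the cheap or the high-capacity extractor should update the shared memory.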