Explainable transfer learning ensemble AI model for lung ultrasound pneumothorax detection with expert benchmark.
Authors
Affiliations (13)
- Department of Military, Disaster and Law Enforcement Medicine, Semmelweis University, P.O.B. 2, Budapest, 1428, Hungary. [email protected].
- Department of Anesthesiology and Intensive Therapy, Semmelweis University, P.O.B. 2, Budapest, H-1428, Hungary. [email protected].
- University Research and Innovation Center, John von Neumann Faculty of Informatics, Óbuda University, Bécsi Út 96/B, Budapest, Hungary. [email protected].
- University Research and Innovation Center, John von Neumann Faculty of Informatics, Óbuda University, Bécsi Út 96/B, Budapest, Hungary.
- Department of Surgery, Transplantation and Gastroenterology, Semmelweis University, P.O.B. 2, Budapest, 1428, Hungary.
- Department of Anaesthesiology, Intensive Care and Pain Medicine, University Hospital Limerick, St Nessan's Road, Dooradoyle, Limerick, V94 F858, Ireland.
- Department of Intensive Therapy, Semmelweis University, P.O.B. 2, Budapest, 1428, Hungary.
- PSI Pain Clinic, MEDCITY Health Center, Budapest, Hungary.
- Department of Anatomy, Histology and Embryology, Semmelweis University, P.O.B. 2, Budapest, 1428, Hungary.
- Physiological Controls Research Center, Óbuda University, Bécsi Út 96/B, Budapest, Hungary.
- Department of Statistics, Corvinus University of Budapest, Fővám Tér 8, Budapest, Hungary.
- Laboratory for Percutaneous Surgery, School of Computing, Queen's University, Kingston, ON, K7L 2N8, Canada.
- Austrian Center for Medical Innovation and Technology, Viktor-Kaplan-Str. 2, Wiener Neustadt, 2700, Austria.
Abstract
Lung ultrasound is essential for rapid, radiation-free bedside pneumothorax diagnosis but is limited by variability in human interpretation. Key gaps include insufficiently large and diverse human datasets, inconsistent image acquisition, a lack of rigorous expert benchmarking, and inadequate clinical interpretability of existing artificial intelligence models. We aimed to develop and validate a robust, explainable artificial intelligence (AI) ensemble model addressing these critical gaps. With our multidisciplinary team, we developed an explainable soft-voting ensemble model trained on 1,856 diverse ultrasound clips from critically ill patients, healthy volunteers, and tailored cadaver models. Model interpretability was ensured using visualization, with heatmaps validated by expert clinicians. The model's diagnostic performance was rigorously benchmarked against 11 experienced clinicians on an independent, balanced test set. Statistical analyses included sensitivity, specificity, and inter-rater reliability. The ensemble model achieved 100% sensitivity (95% CI: 85·8%-100·0%) and 100% specificity (95% CI: 85·8%-100·0%), surpassing expert sensitivity and specificity. The diagnostic performance of the experts differed significantly by ultrasound mode, with notably lower specificity in M-mode imaging (p < 0·001). The AI consistently maintained perfect sensitivity and produced significantly fewer false positives than clinicians across all conditions, including challenging diagnostic scenarios (e.g., subtle pleural motion), and showed excellent generalizability to both cadaveric and clinical cases. Our explainable AI ensemble robustly matches the consensus-level performance of an expert "committee," significantly reducing diagnostic variability and false-positive diagnoses. This AI tool can serve as a critical second reader, standardize clinical decisions at the bedside, and substantially improve patient safety.
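The two quantitative ideas in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: `soft_vote` averages per-clip probabilities from hypothetical base classifiers (the soft-voting rule), and `exact_ci_all_correct` gives the exact (Clopper-Pearson) 95% confidence interval for a proportion when every case is classified correctly, the situation behind a reported 100% sensitivity or specificity. The test-set size of 24 cases per class used below is an assumption inferred for illustration from the reported 85·8% lower bound, not a figure stated in the abstract.

```python
import math

def soft_vote(prob_lists, threshold=0.5):
    """Soft voting: average each clip's predicted pneumothorax probability
    across base models, then apply a decision threshold."""
    n_models = len(prob_lists)
    n_clips = len(prob_lists[0])
    avg = [sum(p[i] for p in prob_lists) / n_models for i in range(n_clips)]
    return [1 if a >= threshold else 0 for a in avg]

def exact_ci_all_correct(n, alpha=0.05):
    """Clopper-Pearson CI for a proportion when all n cases are correct:
    the lower bound is (alpha/2)**(1/n); the upper bound is 1.0."""
    return ((alpha / 2) ** (1.0 / n), 1.0)

# Three hypothetical base models scoring four clips (values invented):
probs = [
    [0.9, 0.2, 0.7, 0.1],
    [0.8, 0.4, 0.6, 0.3],
    [0.7, 0.1, 0.9, 0.2],
]
print(soft_vote(probs))               # [1, 0, 1, 0]

# With ~24 cases per class, 24/24 correct gives a lower bound near 85.8%:
lo, hi = exact_ci_all_correct(24)
print(round(lo, 3), hi)               # 0.858 1.0
```

Averaging probabilities (soft voting) rather than majority-voting hard labels lets a confident base model outweigh uncertain ones, which is one common motivation for this ensemble rule.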