Deep learning model for automated identification of ventrally positioned right hepatic artery in contrast-enhanced computed tomography of pediatric congenital biliary dilatation: development and clinical application.

March 23, 2026

Authors

Luo J, Wang H, Huang K, Diao M, Li L

Affiliations (5)

  • Department of General Surgery, Capital Center for Children's Health, Capital Medical University, Beijing, China.
  • Capital Institute of Pediatrics, Chinese Academy of Medical Sciences & Peking Union Medical College, Beijing, China.
  • Department of General Surgery, Wuhan Children's Hospital (Wuhan Maternal and Child Healthcare Hospital), Tongji Medical College, Huazhong University of Science & Technology, Wuhan, China.
  • Department of General Surgery, Capital Center for Children's Health, Capital Medical University, Beijing, China. [email protected].
  • Department of Pediatric Surgery, Tsinghua University Affiliated Beijing Tsinghua Changgung Hospital, Beijing, China. [email protected].

Abstract

Preoperative identification of a ventrally positioned right hepatic artery (vRHA) is critical in congenital biliary dilatation (CBD), as unrecognized variants increase surgical risk, yet detection on computed tomography (CT) is challenging in routine pediatric practice. This study aimed to develop and validate a You Only Look Once version 12 (YOLOv12)-based model for vRHA identification in contrast-enhanced CT using a targeted key-slice strategy. In this retrospective single-center study, 232 CBD patients (116 vRHA, 116 controls) were divided into training (n=186) and test (n=46) sets. Five YOLOv12 sub-models were trained as second-stage classifiers using 1,452 radiologist-selected key arterial-phase slices. Performance was assessed by precision, recall, F1-score, mean average precision (mAP), and area under the curve (AUC), and diagnostic performance was compared with that of two radiologists using DeLong's test. All sub-models showed perfect precision (1.000), with recall ranging from 0.684 to 0.895. YOLOv12n achieved the best overall performance (recall 0.842, F1-score 0.914, mAP50 0.989, AUC 0.977; 95% confidence interval, 0.913-1.000). It significantly outperformed the junior radiologist (AUC 0.737, P<0.001) and performed comparably to the senior radiologist (AUC 0.947, P=0.515). The YOLOv12n model achieved excellent diagnostic performance for vRHA identification on key CT slices and performed at a senior-radiologist level, supporting its potential role in preoperative assessment.
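The abstract's headline numbers (precision, recall, F1-score, AUC) are standard binary-classification metrics. A minimal, hypothetical sketch of how such slice-level metrics could be computed with scikit-learn; the labels, confidence scores, and 0.5 threshold below are invented for illustration and are not the paper's data or pipeline, which uses YOLOv12 detections as a second-stage classifier:

```python
# Illustrative only: slice-level metrics for a hypothetical vRHA classifier.
# All labels and scores are made-up example values.
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score

y_true  = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = vRHA present on the key slice
y_score = [0.92, 0.81, 0.40, 0.77, 0.10, 0.55, 0.05, 0.30]  # model confidence
y_pred  = [int(s >= 0.5) for s in y_score]  # binarize at an assumed 0.5 cutoff

precision = precision_score(y_true, y_pred)  # fraction of positive calls that are correct
recall    = recall_score(y_true, y_pred)     # fraction of true vRHA slices detected
f1        = f1_score(y_true, y_pred)         # harmonic mean of precision and recall
auc       = roc_auc_score(y_true, y_score)   # threshold-free ranking quality
print(precision, recall, f1, auc)
```

AUC is computed from the raw confidence scores rather than the thresholded predictions, which is why a model can rank cases well (high AUC) even when a fixed cutoff misses some positives (lower recall), as in the sub-models reported above.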

Topics

Journal Article
