U-Net and YOLOv8 Artificial Intelligence Models for Automated Recognition of Internal Jugular Veins and Radial Arteries: A Foundational Study for Artificial Intelligence-guided Vascular Cannulation in Point-of-care Ultrasound.
Authors
Affiliations (12)
- Department of Ultrasound Medicine, the Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China. Electronic address: [email protected].
- Department of Ultrasound Medicine, the Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China. Electronic address: [email protected].
- Department of Ultrasound Medicine, the Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China.
- Department of Anesthesiology and Perioperative Medicine, the Second Affiliated Hospital of Anhui Medical University, Hefei, China.
- Department of Ultrasound, Anhui Chest Hospital, Hefei, China.
- Department of Anesthesiology, the First Affiliated Hospital of Anhui University of Science & Technology, Huainan, China.
- Department of Functional Diagnostics, Huaibei People's Hospital, Huaibei, China.
- Department of Science and Education, Huaibei People's Hospital, Huaibei, China.
- Department of Ultrasound Medicine, the First Affiliated Hospital of Anhui Medical University, Hefei, China.
- Department of Clinical Medicine, the Second School of Clinical Medicine, Anhui Medical University, Hefei, China.
- Department of Ultrasound Medicine, the Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China. Electronic address: [email protected].
- Department of Ultrasound Medicine, the Second Affiliated Hospital of Anhui Medical University, Hefei, Anhui, China. Electronic address: [email protected].
Abstract
Objective: This study compares the U-Net and You Only Look Once version 8 (YOLOv8) models for identifying internal jugular veins (IJVs) and radial arteries (RAs) in longitudinal and/or transverse ultrasound views, evaluating their vascular recognition capabilities for artificial intelligence-guided cannulation systems under point-of-care ultrasound (POCUS) visualization.
Design: A retrospective study.
Setting: Six teaching hospitals.
Participants: Data from 1,122 ultrasound images (612 IJVs, 510 RAs) acquired between January and December 2024.
Interventions: None.
Measurements and Main Results: U-Net was employed for pixelwise segmentation with a combined Dice-cross-entropy loss, while YOLOv8s-seg incorporated attention mechanisms for object detection. Model performance was evaluated using precision, recall, F1-score, mean average precision at an intersection-over-union (IoU) threshold of 0.5 ([email protected]), IoU, Dice coefficient, and inference time. YOLOv8 demonstrated superior performance, with a precision of 0.996, recall of 1.00, [email protected] of 0.995, IoU of 0.739, Dice coefficient of 0.834, and inference time of 26.3 ms, outperforming U-Net, which achieved a precision of 0.988, recall of 0.998, [email protected] of 0.993, IoU of 0.719, Dice coefficient of 0.816, and inference time of 38.9 ms. In the validation set, YOLOv8 exhibited higher accuracy across all categories, with values of 0.96 for V-S, 0.90 for V-L, 0.89 for A-S, and 0.86 for A-L, compared with U-Net's respective accuracies of 0.81, 0.88, 0.87, and 0.82.
Conclusions: YOLOv8 outperformed U-Net in accuracy, precision, and speed, proving more suitable for real-time vascular localization in clinical settings. These findings provide critical algorithmic foundations for developing automated vascular puncture navigation platforms.
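The overlap metrics reported above (IoU and Dice coefficient) and the combined Dice-cross-entropy training loss can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the equal loss weighting (`w_dice`, `w_ce`) and the epsilon smoothing term are assumptions.

```python
import numpy as np

def iou(pred, target, eps=1e-7):
    """Intersection over union of two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)

def dice(pred, target, eps=1e-7):
    """Dice coefficient: 2*|A∩B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def dice_ce_loss(prob, target, w_dice=0.5, w_ce=0.5, eps=1e-7):
    """Combined loss on soft probability maps: weighted sum of binary
    cross-entropy and (1 - soft Dice). Weights here are illustrative."""
    prob = np.clip(prob, eps, 1 - eps)
    ce = -np.mean(target * np.log(prob) + (1 - target) * np.log(1 - prob))
    inter = (prob * target).sum()
    soft_dice = (2 * inter + eps) / (prob.sum() + target.sum() + eps)
    return w_ce * ce + w_dice * (1 - soft_dice)

# Toy example: 4x4 predicted and ground-truth vessel masks.
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(iou(pred, gt))   # intersection 3 / union 4 = 0.75
print(dice(pred, gt))  # 2*3 / (4 + 3) ≈ 0.857
```

A near-zero `dice_ce_loss` on a perfect prediction confirms the two terms agree: cross-entropy vanishes and soft Dice approaches 1.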