
De-identification Strategy and Re-identification Risks for Facial Computed Tomography Images via Deep Learning.

February 25, 2026

Authors

Kang SU, Kim I, Park SW, Kim WJ, Jang JW, Sung K

Affiliations (8)

  • Department of Medical Information, Kangwon National University Hospital, Chuncheon, Republic of Korea.
  • Department of Convergence Security, Kangwon National University, Chuncheon, Republic of Korea.
  • Department of Plastic Surgery, Kangwon National University Hospital, Chuncheon, Republic of Korea.
  • Research Institute, Biolink Inc, Daegu, Republic of Korea.
  • Department of Internal Medicine, Kangwon National University Hospital, Chuncheon, Republic of Korea.
  • Department of Internal Medicine, School of Medicine, Kangwon National University, Chuncheon, Republic of Korea.
  • Department of Neurology, Kangwon National University Hospital, Chuncheon, Republic of Korea.
  • Department of Plastic Surgery, School of Medicine, Kangwon National University, Gangwon State, Gangwondaehak-Gil 1, Chuncheon, 24341, Republic of Korea. [email protected].

Abstract

The aim of this study was to develop and evaluate a deep learning-based selective de-identification method for head computed tomography (CT) images that removes facial soft-tissue features while preserving facial bone structures, and to assess the re-identification risk after de-identification to ensure effective privacy protection. This retrospective study included 3206 facial CT scans (308,982 images) from 3091 patients with facial bone fractures acquired at a single hospital. All CT images were processed with a YOLOv8-based model that selectively removed facial soft-tissue features. The de-identified 2D slices and their original counterparts were reconstructed into 3D facial models, which were aligned and normalized for subsequent re-identification analysis. Re-identification risk was assessed using cosine similarity of deep learning-based facial embeddings and through a human assessment comparing general participants and plastic surgeons. The model demonstrated high accuracy in detecting and removing facial features, achieving a mAP@0.5 of 0.858. Deep learning-based re-identification accuracy decreased from 85% to 64% when comparing original and de-identified images. In the blind human re-identification assessment, correct identification rates declined from 84% to 55%, with similar reductions observed in both general participants (84% to 55%) and plastic surgeons (83% to 54%), indicating no substantial difference between the groups. This study developed a selective de-identification method for facial CT images that preserves craniofacial structures while reducing re-identification risk. The method demonstrated significant privacy enhancement with minimal impact on data utility. To support broader adoption among facial CT researchers, we have made the de-identification model and a ready-to-run demo publicly available on GitHub.
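The cosine-similarity re-identification check described in the abstract can be sketched as follows. This is a minimal illustration only: the facial embedding network itself (the paper does not specify which one) is assumed to exist upstream, embeddings are represented here as plain float vectors, and the match threshold of 0.6 is a hypothetical value, not one taken from the paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_reidentified(emb_original, emb_deidentified, threshold=0.6):
    """Treat a similarity at or above the threshold as a successful
    re-identification. The threshold is an illustrative assumption."""
    return cosine_similarity(emb_original, emb_deidentified) >= threshold

# Toy example with 3-dimensional "embeddings" of the same face
# before and after de-identification:
match = is_reidentified([0.9, 0.1, 0.2], [0.88, 0.12, 0.21])
```

In practice the embeddings would come from running a face-recognition network on the 3D-reconstructed original and de-identified models; the fraction of gallery comparisons where the true identity scores highest gives the re-identification accuracy reported in the abstract.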

Topics

Journal Article
