
AI-Generated Synthetic Panoramic Radiograph for Enhanced Dental Image Analysis.

March 9, 2026

Authors

Fu X, Li X, Delamare E, Huang Z, Alves Rabelo K, Bi L, Kim J

Affiliations (5)

  • Biomedical Data Analysis and Visualisation (BDAV) Lab, School of Computer Science, The University of Sydney, Sydney, Australia.
  • Institute of Translational Medicine, National Center for Translational Medicine (Shanghai), Shanghai Jiao Tong University, Shanghai, China.
  • Sydney Dental School, Faculty of Medicine and Health, The University of Sydney, Sydney, Australia.
  • Institute of Translational Medicine, National Center for Translational Medicine (Shanghai), Shanghai Jiao Tong University, Shanghai, China. [email protected].
  • Biomedical Data Analysis and Visualisation (BDAV) Lab, School of Computer Science, The University of Sydney, Sydney, Australia. [email protected].

Abstract

Synthetic image data has emerged as a powerful tool in artificial intelligence (AI)-enabled medical image analysis, providing scalable solutions to challenges such as data scarcity, class imbalance, and privacy preservation. Despite rapid advances, there is a paucity of research on the optimal generation and integration of synthetic and real data for medical image AI applications. In this study, we propose a new fusion framework for synthetic-real data integration in panoramic radiograph (PR) analysis across three dental tasks: (i) full-mouth segmentation, (ii) abnormality segmentation, and (iii) multi-label disease classification. We introduce a clinically guided conditional generative adversarial network (GAN) architecture that generates synthetic datasets at two resolutions to explore fidelity-efficiency trade-offs. The generated synthetic PRs were evaluated across four fusion strategies (real-only, matched-distribution, class-balancing, and synthetic-only) using convolutional neural network (CNN) and vision foundation model (FM) pipelines. Our results on three public datasets demonstrated that high-resolution (512 × 512) synthetics substantially improved abnormality segmentation, while lower-resolution (256 × 256) synthetics remained sufficient for full-mouth segmentation at 40% lower training cost. Synthetic-only models retained at least 93% of real-only performance across tasks and resolutions, enabling privacy-preserving training with minimal compromise. Fine-tuning FMs with synthetic-real data fusion improved zero-shot abnormality segmentation performance by up to 17%, benefiting particularly from dataset-rebalancing fusion strategies. Blinded clinical evaluation confirmed that higher-resolution synthetic PRs were visually plausible and often indistinguishable from real data. Based on our findings, we offer practical recommendations for task-aligned conditioning, resolution selection, and fusion strategy to support robust, equitable, and privacy-preserving medical image analysis.
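The four fusion strategies named in the abstract can be pictured as simple dataset-assembly rules. The sketch below is an illustrative assumption, not the authors' implementation: samples are dictionaries with a hypothetical `label` field, and the sampling ratios are guesses at the general idea (matched-distribution mirrors the real class counts; class-balancing tops minority classes up to the largest class).

```python
import random

def build_training_set(real, synthetic, strategy, seed=0):
    """Assemble a training pool under one of four fusion strategies.
    Illustrative sketch only; field names and ratios are assumptions."""
    rng = random.Random(seed)
    if strategy == "real_only":
        return list(real)
    if strategy == "synthetic_only":
        return list(synthetic)

    # Both remaining strategies need the real class distribution.
    counts = {}
    for s in real:
        counts[s["label"]] = counts.get(s["label"], 0) + 1
    pool = list(real)

    if strategy == "matched_distribution":
        # Add synthetic samples in proportion to the real class counts.
        for label, n in counts.items():
            candidates = [s for s in synthetic if s["label"] == label]
            pool += rng.sample(candidates, min(n, len(candidates)))
        return pool
    if strategy == "class_balancing":
        # Top up each minority class with synthetics until it matches
        # the largest real class.
        target = max(counts.values())
        for label, n in counts.items():
            deficit = target - n
            candidates = [s for s in synthetic if s["label"] == label]
            pool += rng.sample(candidates, min(deficit, len(candidates)))
        return pool
    raise ValueError(f"unknown strategy: {strategy}")
```

For example, with 10 real "caries" and 2 real "cyst" samples, class-balancing would draw up to 8 synthetic "cyst" samples, while matched-distribution would add roughly 10 synthetic "caries" and 2 synthetic "cyst" samples.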

Topics

Journal Article
