A unified deep learning framework for cross-platform harmonization of multi-tracer PET quantification
Authors
Affiliations (1)
- Department of Nuclear Medicine/PET center, Huashan Hospital, Fudan University, Shanghai, China
Abstract
Quantitative PET underpins diagnosis and treatment monitoring in neurodegenerative disease, yet systematic biases between PET-MRI and PET-CT preclude threshold transfer and cross-site comparability. We present a unified, anatomically guided deep-learning framework that harmonizes multi-tracer PET-MRI to PET-CT. The model learns CT-anchored attenuation representations with a Vision Transformer autoencoder, aligns MRI features to CT space via contrastive objectives, and performs attention-guided residual correction. In paired same-day scans (N = 70; amyloid, tau, FDG), cross-platform bias fell by more than 80% while inter-regional biological topology was preserved. The framework generalized zero-shot to held-out tracers (18F-florbetapir; 18F-FP-CIT) without retraining. Multicentre validation (N = 420; three sites, four vendors) reduced amyloid discrepancies from 23.6 to 4.1 Centiloid units (within PET-CT test-retest precision) and aligned tau SUVR thresholds. These results enable platform-agnostic diagnostic cutoffs and reliable longitudinal monitoring when patients transition between modalities, establishing a practical route to scalable, radiation-sparing quantitative PET in therapeutic workflows.
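The abstract's cross-modal alignment step (mapping MRI features to CT-anchored space via contrastive objectives) can be illustrated with a standard symmetric InfoNCE loss. The paper does not specify its exact objective, so the function below (`info_nce_loss`, the `temperature` value, and the NumPy formulation) is a hypothetical minimal sketch of this family of losses, not the authors' implementation:

```python
import numpy as np

def info_nce_loss(mri_feats, ct_feats, temperature=0.07):
    """Symmetric InfoNCE contrastive loss between paired embeddings.

    mri_feats, ct_feats: (N, D) arrays; row i of each array is assumed to
    come from the same subject/patch, so the N diagonal pairs are the
    positives and all off-diagonal pairs are negatives.
    """
    # L2-normalize so the dot product is cosine similarity
    m = mri_feats / np.linalg.norm(mri_feats, axis=1, keepdims=True)
    c = ct_feats / np.linalg.norm(ct_feats, axis=1, keepdims=True)
    logits = m @ c.T / temperature   # (N, N) similarity matrix
    idx = np.arange(len(m))          # positive pairs sit on the diagonal

    def xent(lg):
        # cross-entropy with the diagonal as the target class
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[idx, idx].mean()

    # average the MRI->CT and CT->MRI directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

Minimizing such a loss pulls each MRI embedding toward its paired CT-anchored embedding while pushing it away from other subjects' embeddings, which is one plausible way to realize the "aligns MRI features to CT space" step described above.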