
A unified deep learning framework for cross-platform harmonization of multi-tracer PET quantification in neurodegenerative disease.

March 30, 2026

Authors

Wang J, Zhong A, Xu Q, Huang H, Zhu Y, Lu J, Wang M, Jiang J, Li C, Ni M, Sun K, Guan Y, Lu J, Tian M, Shen D, Zhang H, Wang Q, Zuo C

Affiliations (14)

  • Department of Nuclear Medicine/PET center, Huashan Hospital, Fudan University, Shanghai, China.
  • School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China.
  • Shanghai Clinical Research and Trial Center, Shanghai, China.
  • School of Life Sciences, Shanghai University, Shanghai, China.
  • Department of Nuclear Medicine, Division of Life Sciences and Medicine, The First Affiliated Hospital of USTC, University of Science and Technology of China, Hefei, China.
  • Department of Radiology and Nuclear Medicine, Xuanwu Hospital, Capital Medical University, Beijing, China.
  • Beijing Key Laboratory of Magnetic Resonance Imaging and Brain Informatics, Beijing, China.
  • Key Laboratory of Neurodegenerative Diseases, Ministry of Education, Beijing, China.
  • Human Phenome Institute, Fudan University, Shanghai, China.
  • Shanghai United Imaging Intelligence Co. Ltd., Shanghai, China.
  • Department of Nuclear Medicine/PET center, Huashan Hospital, Fudan University, Shanghai, China. [email protected].
  • School of Biomedical Engineering & State Key Laboratory of Advanced Medical Materials and Devices, ShanghaiTech University, Shanghai, China. [email protected].
  • Shanghai Clinical Research and Trial Center, Shanghai, China. [email protected].
  • Department of Nuclear Medicine/PET center, Huashan Hospital, Fudan University, Shanghai, China. [email protected].

Abstract

Quantitative PET underpins diagnosis and treatment monitoring in neurodegenerative disease, yet systematic biases between PET-MRI and PET-CT preclude threshold transfer and cross-site comparability. We developed and validated the first unified, anatomically guided deep-learning framework to harmonize PET-MRI quantification to PET-CT standards across multiple tracers and scanner manufacturers. The model learns CT-anchored attenuation representations using a vision transformer autoencoder, aligns MRI features to the CT space via contrastive objectives, and performs attention-guided residual correction. In paired same-day scans (N = 70; ¹⁸F-FDG, ¹⁸F-florbetaben, and ¹⁸F-florzolotau), cross-platform bias fell by >80% while preserving inter-regional biological topology. The framework generalized zero-shot to held-out tracers (¹⁸F-florbetapir and ¹⁸F-FP-CIT) without retraining. Multicenter validation (N = 420; three sites, four vendors) reduced amyloid Centiloid discrepancies from 23.6 to 4.1 (close to, though slightly above, PET-CT test-retest variability) and aligned tau SUVR thresholds. These results support more consistent cross-platform diagnostic cut-offs and reliable longitudinal monitoring when patients transition between modalities, establishing a practical route to scalable, radiation-sparing quantitative PET in therapeutic workflows.
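The abstract names two learnable components: a contrastive objective that aligns MRI features to the CT-anchored embedding space, and an attention-guided residual correction applied to PET-MRI quantification. The paper's actual architecture is not reproduced here; the following is a minimal NumPy sketch of those two generic ideas, with all function names, shapes, and weights (`info_nce`, `residual_correction`, the `weights` dict) being illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def info_nce(mri_emb, ct_emb, temperature=0.1):
    """Contrastive (InfoNCE) loss: pull each MRI embedding toward its paired
    CT embedding, push it away from non-matching CT embeddings."""
    # L2-normalize so the dot product is cosine similarity
    mri = mri_emb / np.linalg.norm(mri_emb, axis=1, keepdims=True)
    ct = ct_emb / np.linalg.norm(ct_emb, axis=1, keepdims=True)
    logits = mri @ ct.T / temperature            # (N, N); matched pairs on diagonal
    # Numerically stable log-softmax over each row, then pick the diagonal
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

def residual_correction(pet_mri_suvr, mri_feat, weights):
    """Attention-gated residual correction: a sigmoid attention score gates a
    predicted residual that shifts PET-MRI SUVR toward the PET-CT standard."""
    attention = 1.0 / (1.0 + np.exp(-(mri_feat @ weights["attn"])))  # (N,) gate in (0, 1)
    residual = mri_feat @ weights["res"]                              # (N,) predicted shift
    return pet_mri_suvr + attention * residual

# Toy demonstration: 8 paired scans with hypothetical 16-dim features
N, D = 8, 16
mri_emb = rng.normal(size=(N, D))
ct_emb = mri_emb + 0.05 * rng.normal(size=(N, D))     # well-aligned pairs
loss_aligned = info_nce(mri_emb, ct_emb)
loss_random = info_nce(mri_emb, rng.normal(size=(N, D)))
print(f"InfoNCE, aligned pairs: {loss_aligned:.3f}")
print(f"InfoNCE, random pairs:  {loss_random:.3f}")   # higher: no pairing signal

weights = {"attn": rng.normal(size=D), "res": 0.01 * rng.normal(size=D)}
suvr = rng.uniform(1.0, 2.5, size=N)                  # regional SUVR values
corrected = residual_correction(suvr, mri_emb, weights)
print("corrected SUVR shape:", corrected.shape)
```

In a trained system the embeddings would come from the CT autoencoder and MRI encoder, and `weights` would be learned end-to-end; here they are random, so the sketch only shows that aligned pairs yield a lower contrastive loss than random ones and that the correction preserves the per-region SUVR shape.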

Topics

Journal Article
