
DDTracking: A diffusion model-based deep generative framework with local-global spatiotemporal modeling for diffusion MRI tractography.

January 29, 2026

Authors

Li Y, Zhang W, Zhu X, Wu Y, Rathi Y, O'Donnell LJ, Zhang F

Affiliations (5)

  • School of Information and Communication Engineering, University of Electronic Science and Technology of China, No.2006, Xiyuan Ave, Chengdu, 611731, Sichuan, China.
  • School of Computer Science and Engineering, Nanjing University of Science and Technology, No.200, Xiaolingwei St, Nanjing, 210094, Jiangsu, China.
  • Department of Psychiatry, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA, 02115, USA.
  • Department of Radiology, Brigham and Women's Hospital, Harvard Medical School, 75 Francis St, Boston, MA, 02115, USA.
  • School of Information and Communication Engineering, University of Electronic Science and Technology of China, No.2006, Xiyuan Ave, Chengdu, 611731, Sichuan, China. Electronic address: [email protected].

Abstract

Diffusion MRI (dMRI) tractography is an advanced technique that uniquely enables in vivo mapping of brain fiber pathways. Traditional methods rely on tissue modeling to estimate fiber orientations for streamline propagation; these approaches are computationally intensive and remain sensitive to noise and artifacts. Recent deep learning-based approaches enable data-driven fiber tracking by directly mapping dMRI signals to orientations, demonstrating both improved efficiency and accuracy. However, existing methods typically operate by either leveraging local signal information or learning global dependencies along streamlines, but rarely both. This paper presents DDTracking, a deep generative framework for tractography. One key innovation is the reformulation of streamline propagation as a conditional denoising diffusion process. To the best of our knowledge, this is the first work to apply diffusion models to fiber tracking. Our network architecture incorporates two new designs: (1) a dual-pathway encoding scheme that extracts complementary local spatial features and global temporal context, and (2) a conditional diffusion model module that integrates the spatiotemporal features to predict propagation orientations. All components are trained jointly in an end-to-end manner without any pretraining. In this way, DDTracking can capture fine-scale structural details at each point while ensuring long-range consistency across the entire streamline. We conduct a comprehensive evaluation across diverse datasets, including both synthetic and clinical data. Experiments demonstrate that DDTracking outperforms traditional model-based and state-of-the-art deep learning-based methods in terms of tracking accuracy and computational efficiency. Furthermore, our results highlight DDTracking's high generalizability across heterogeneous datasets, spanning varying health conditions, age groups, imaging protocols, and scanner types. Code is available at: https://github.com/yishengpoxiao/DDTracking.git.
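The abstract does not provide implementation details, but the core idea of "streamline propagation as a conditional denoising diffusion process" can be illustrated in a minimal sketch: start the next-step orientation as Gaussian noise and iteratively denoise it into a unit direction, conditioned on a feature vector standing in for the fused local/global spatiotemporal encoding. Everything below is hypothetical: the cosine noise schedule, the DDIM-style deterministic update, and the toy denoiser are stand-ins for the paper's actual network and sampler, not the authors' method.

```python
import numpy as np

def cosine_alphas(T):
    # Cumulative alpha_bar values from a cosine noise schedule
    # (a common choice; the paper's actual schedule is not specified).
    t = np.linspace(0.0, 1.0, T + 1)
    f = np.cos((t + 0.008) / 1.008 * np.pi / 2) ** 2
    return f[1:] / f[0]

def sample_orientation(denoise_fn, cond, T=50, rng=None):
    """Reverse diffusion: denoise a random 3-vector into a unit
    propagation direction, conditioned on `cond` (a stand-in for the
    fused spatiotemporal features described in the abstract)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    abar = cosine_alphas(T)
    x = rng.standard_normal(3)                    # start from pure noise
    for t in reversed(range(T)):
        a_t = abar[t]
        a_prev = abar[t - 1] if t > 0 else 1.0
        eps_hat = denoise_fn(x, t, cond)          # predicted noise
        x0_hat = (x - np.sqrt(1 - a_t) * eps_hat) / np.sqrt(a_t)
        # DDIM-style deterministic update toward the predicted clean sample
        x = np.sqrt(a_prev) * x0_hat + np.sqrt(1 - a_prev) * eps_hat
    return x / np.linalg.norm(x)                  # unit step direction

# Toy denoiser: pretends the conditioning vector *is* the clean target,
# so the sampler should recover cond's direction (a real model would be
# a trained network taking dMRI-derived features as input).
toy = lambda x, t, cond: (x - cond) / max(np.sqrt(1 - cosine_alphas(50)[t]), 1e-8)
d = sample_orientation(toy, cond=np.array([1.0, 0.0, 0.0]))
```

In an actual tracker, each predicted direction `d` would be scaled by a step size and added to the current streamline point, and the loop would repeat until a stopping criterion (e.g., leaving a tissue mask) is met.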

Topics

  • Diffusion Tensor Imaging
  • Deep Learning
  • Image Processing, Computer-Assisted
  • Journal Article
