🤖 AI Summary
Most existing diffusion MRI tractography methods are non-differentiable, hindering their integration into end-to-end deep learning frameworks. This paper introduces the first fully differentiable streamline propagator, implemented in PyTorch, which strictly preserves numerical equivalence with standard ODE solvers (e.g., Euler and Runge–Kutta) and enables full backpropagation of gradients through the entire tractography pipeline. Unlike prior approaches, the method reconstructs conventional white matter tractography differentiably without compromising anatomical fidelity, establishing the first end-to-end learnable mapping from dMRI data to macroscopic structural connectomes. Experiments demonstrate tractography performance nearly identical to standard non-differentiable implementations, with mean angular error below 1.5° and streamline overlap above 92%. This work provides a foundational tool for differentiable brain connectomics, microstructural inference, and multimodal neuroimaging integration.
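The core idea, a streamline propagator built entirely from differentiable operations, can be sketched as follows. This is an illustrative toy, not the paper's implementation: the direction-field layout, function names, and the constant test field are all assumptions. It uses trilinear interpolation via `torch.nn.functional.grid_sample` and fixed-step Euler integration, so gradients from a loss on streamline coordinates flow back to the underlying field.

```python
import torch
import torch.nn.functional as F

def sample_directions(field, points):
    """Trilinearly interpolate a direction field at continuous voxel coordinates.

    field:  (3, D, H, W) tensor of local fiber directions (hypothetical layout).
    points: (N, 3) tensor of (x, y, z) voxel coordinates.
    """
    D, H, W = field.shape[1:]
    size = torch.tensor([W - 1, H - 1, D - 1], dtype=points.dtype)
    grid = (2 * points / size - 1).view(1, -1, 1, 1, 3)   # normalize to [-1, 1]
    out = F.grid_sample(field.unsqueeze(0), grid, mode="bilinear",
                        align_corners=True)                # (1, 3, N, 1, 1)
    return out.view(3, -1).t()                             # (N, 3)

def euler_propagate(field, seeds, n_steps, step_size):
    """Fixed-step Euler streamline integration; every op supports autograd."""
    pts = [seeds]
    for _ in range(n_steps):
        d = sample_directions(field, pts[-1])
        d = d / (d.norm(dim=-1, keepdim=True) + 1e-8)      # unit step direction
        pts.append(pts[-1] + step_size * d)
    return torch.stack(pts, dim=1)                         # (N, n_steps + 1, 3)

# Toy check: a constant +x field; gradients reach the field itself.
field = torch.zeros(3, 8, 8, 8, dtype=torch.float64)
field[0] = 1.0
field.requires_grad_(True)
seeds = torch.tensor([[1.0, 4.0, 4.0]], dtype=torch.float64)
tract = euler_propagate(field, seeds, n_steps=4, step_size=1.0)
tract[:, -1].sum().backward()  # backprop from the streamline endpoints
```

In the constant +x field the seed at x = 1 advances one voxel per step, and `field.grad` is populated after `backward()`, which is exactly the property a non-differentiable propagator lacks.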
📝 Abstract
Diffusion MRI (dMRI) provides a distinctive means to probe the microstructural architecture of living tissue, enabling applications such as brain connectivity analysis, modeling across multiple conditions, and the estimation of macrostructural features. Tractography, which emerged in the late 1990s and accelerated through the early 2000s, visualizes white matter pathways in the brain from dMRI data. Most diffusion tractography methods rely on procedural streamline propagators or on global energy minimization. Although recent advances in deep learning have enabled tasks that were previously challenging, existing tractography approaches are often non-differentiable, limiting their integration into end-to-end learning frameworks. While progress has been made in representing streamlines within differentiable frameworks, no existing method offers fully differentiable propagation. In this work, we propose a fully differentiable solution that retains numerical fidelity with a leading streamline algorithm. The key is a streamline propagator engineered in PyTorch with no components that block gradient flow, making it fully differentiable. We show that our method matches standard propagators while remaining differentiable. By translating streamline propagation into a differentiable PyTorch framework, we enable deeper integration of tractography into deep learning workflows, laying the foundation for macrostructural reasoning that is both computationally robust and scientifically rigorous.
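The claim of numerical fidelity with standard ODE solvers can be illustrated with a small sketch. The analytic direction field and function names here are assumptions, not the paper's propagator: a classical fourth-order Runge–Kutta step written with PyTorch ops reproduces a plain NumPy reference in double precision, while the PyTorch version additionally supports backpropagation through the solver.

```python
import numpy as np
import torch

# Hypothetical analytic direction field: unit tangents of circles in the x-y plane.
def f_np(p):
    r = np.hypot(p[0], p[1])
    return np.array([-p[1], p[0], 0.0]) / r

def f_t(p):
    r = torch.hypot(p[0], p[1])
    return torch.stack([-p[1] / r, p[0] / r, torch.zeros((), dtype=p.dtype)])

def rk4(f, p, h):
    # Classical RK4 step; identical arithmetic for both backends.
    k1 = f(p)
    k2 = f(p + h / 2 * k1)
    k3 = f(p + h / 2 * k2)
    k4 = f(p + h * k3)
    return p + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

p0 = torch.tensor([1.0, 0.0, 0.0], dtype=torch.float64, requires_grad=True)
p1 = rk4(f_t, p0, 0.1)                            # differentiable PyTorch step
ref = rk4(f_np, np.array([1.0, 0.0, 0.0]), 0.1)   # NumPy reference step
p1.sum().backward()                               # gradients flow through RK4
```

Because both backends execute the same IEEE double-precision arithmetic, the two results agree to machine precision, which is the sense in which a differentiable re-implementation can "retain numerical fidelity" with a conventional solver.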