Locality-Aware Automatic Differentiation on the GPU for Mesh-Based Computations

📅 2025-08-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the low computational efficiency of gradient and Hessian evaluations in energy optimization on triangular meshes, this paper proposes a lightweight, GPU-oriented automatic differentiation (AD) framework. Departing from global computation graphs, it employs element-wise forward-mode AD, leveraging the locality and sparsity inherent in mesh energies to achieve fully localized parallel differentiation. Hardware-aware optimizations—including register-level computation, shared-memory scheduling, and automatic sparse Hessian assembly—eliminate redundant memory accesses and synchronization overhead. Experiments demonstrate that second-order derivative computation is 6.2× faster than PyTorch, and Hessian-vector products are accelerated by 2.76×. First-order derivatives outperform Warp, JAX, and Dr.JIT by 6.38×, 2.89×, and 1.98×, respectively, approaching the performance of hand-coded derivatives. The framework significantly enhances optimization efficiency for geometric computing tasks such as cloth simulation and surface parameterization.
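The core idea of element-wise forward-mode AD can be illustrated with dual numbers: each local degree of freedom is seeded with a unit derivative, and the per-element energy is re-evaluated once per seed, so no global computation graph is ever built. A minimal CPU sketch (not the paper's GPU implementation; the spring-edge energy and all names here are illustrative assumptions):

```python
import numpy as np

class Dual:
    """Minimal forward-mode AD value: a scalar plus one derivative seed."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__

    def __sub__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val - o.val, self.dot - o.dot)

    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

    def sqrt(self):
        r = np.sqrt(self.val)
        return Dual(r, self.dot / (2.0 * r))

def edge_energy(x, k=10.0, rest=1.0):
    """Spring energy of one mesh edge; x = [x0, y0, x1, y1] as Duals."""
    dx, dy = x[2] - x[0], x[3] - x[1]
    length = (dx * dx + dy * dy).sqrt()
    stretch = length - rest
    return 0.5 * k * stretch * stretch

def local_gradient(x):
    """One forward pass per local DOF: seed dot = 1 on that DOF only."""
    g = np.zeros(len(x))
    for i in range(len(x)):
        seeded = [Dual(xj, 1.0 if j == i else 0.0) for j, xj in enumerate(x)]
        g[i] = edge_energy(seeded).dot
    return g
```

Because a mesh energy touches only a handful of local DOFs per element, the forward passes stay cheap and fully independent across elements, which is what lets the paper's GPU version keep all intermediate values in registers.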

📝 Abstract
We present a high-performance system for automatic differentiation (AD) of functions defined on triangle meshes that exploits the inherent sparsity and locality of mesh-based energy functions to achieve fast gradient and Hessian computation on the GPU. Our system is designed around per-element forward-mode differentiation, enabling all local computations to remain in GPU registers or shared memory. Unlike reverse-mode approaches that construct and traverse global computation graphs, our method performs differentiation on the fly, minimizing memory traffic and avoiding global synchronization. Our programming model allows users to define local energy terms while the system handles parallel evaluation, derivative computation, and sparse Hessian assembly. We benchmark our system on a range of applications: cloth simulation, surface parameterization, mesh smoothing, and spherical manifold optimization. We achieve a geometric mean speedup of 6.2× over optimized PyTorch implementations for second-order derivatives, and a 2.76× speedup for Hessian-vector products. For first-order derivatives, our system is 6.38×, 2.89×, and 1.98× faster than Warp, JAX, and Dr.JIT, respectively, while remaining on par with hand-written derivatives.
Problem

Research questions and friction points this paper is trying to address.

Accelerating automatic differentiation for mesh-based computations on GPU
Exploiting sparsity and locality in mesh energy functions
Enabling efficient gradient and Hessian computation without global graphs
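The last point, assembling a global sparse Hessian without a global graph, amounts to scattering each element's small dense Hessian block into global (row, col, val) triplets and summing duplicates where elements share vertices. A CPU sketch under illustrative assumptions (the paper performs this assembly on the GPU; the triplet layout and helper names here are not from the paper):

```python
import numpy as np

def assemble_hessian_triplets(elements, local_hessians, d=1):
    """Emit global COO triplets from per-element dense Hessian blocks.
    elements: (E, k) vertex indices; local_hessians: (E, k*d, k*d)."""
    rows, cols, vals = [], [], []
    for elem, H in zip(elements, local_hessians):
        # global DOF indices covered by this element (d DOFs per vertex)
        dofs = np.concatenate([d * v + np.arange(d) for v in elem])
        r, c = np.meshgrid(dofs, dofs, indexing="ij")
        rows.append(r.ravel()); cols.append(c.ravel()); vals.append(H.ravel())
    return map(np.concatenate, (rows, cols, vals))

def triplets_to_dense(rows, cols, vals, n):
    """Sum duplicate (row, col) entries; a real backend would build CSR."""
    H = np.zeros((n, n))
    np.add.at(H, (rows, cols), vals)  # shared-vertex contributions accumulate
    return H
```

For example, two unit-stiffness spring edges sharing vertex 1 each contribute a 2×2 block [[1, -1], [-1, 1]]; the shared diagonal entry sums to 2 in the assembled matrix.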
Innovation

Methods, ideas, or system contributions that make the work stand out.

Per-element forward-mode differentiation on GPU
On-the-fly differentiation minimizing memory traffic
Parallel evaluation with sparse Hessian assembly
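The same per-element structure also yields a matrix-free Hessian-vector product: gather the vector's entries for each element, apply the local dense block, and scatter-add the result, so the global sparse matrix is never formed. A minimal sketch assuming one DOF per vertex for brevity (the function name and layout are illustrative, not the paper's API):

```python
import numpy as np

def hessian_vector_product(elements, local_hessians, v):
    """Matrix-free Hv: gather v on each element's vertices, apply the
    local dense Hessian block, then scatter-add back into the output."""
    out = np.zeros_like(v)
    for elem, H in zip(elements, local_hessians):
        np.add.at(out, elem, H @ v[elem])
    return out
```

On the GPU, each element's gather-multiply-scatter is an independent task, which is the locality the paper exploits for its reported Hessian-vector-product speedup.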