🤖 AI Summary
To address the high computational cost, slow training, and convergence difficulties of vision-based policy learning on complex tasks, this paper proposes a parallel differentiable simulation framework whose core contribution is the first full decoupling of the rendering process from the gradient computation graph, enabling seamless integration into existing differentiable simulation ecosystems without requiring specialized differentiable renderers. This decoupling also attenuates the policy gradient norm, markedly improving optimization stability, and is combined with parallel physics simulation, first-order analytical policy gradients, and a GPU-accelerated rendering pipeline. Experiments on standard visual control benchmarks demonstrate substantial speedups: single-GPU training time is drastically reduced, final return on humanoid locomotion improves by 4×, and a stable humanoid running policy is learned in only four hours.
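A minimal sketch of how such a decoupling can look in practice, assuming a JAX-style differentiable simulator; `physics_step`, `render`, `policy`, and `reward` below are hypothetical toy stand-ins, not the paper's implementation. The renderer is wrapped in `stop_gradient`, so the analytic policy gradient flows through the dynamics and the policy parameters while the rendered pixels are treated as constants:

```python
import jax
import jax.numpy as jnp

def physics_step(s, a):
    # Toy differentiable dynamics (stand-in for a parallel physics simulator).
    return 0.99 * s + 0.1 * a

def render(s):
    # Stand-in for a GPU renderer; it need not be differentiable,
    # because no gradient will be routed through it.
    return jnp.tanh(jnp.outer(s, s)).ravel()

def policy(theta, obs):
    # Tiny linear policy on the rendered observation.
    return jnp.tanh(theta @ obs)

def reward(s, a):
    return -jnp.sum(s**2) - 0.01 * jnp.sum(a**2)

def rollout_loss(theta, s0, horizon=32):
    def step(s, _):
        obs = jax.lax.stop_gradient(render(s))  # decouple rendering from the graph
        a = policy(theta, obs)                   # theta receives gradient here
        return physics_step(s, a), reward(s, a)  # gradient flows through dynamics
    _, rewards = jax.lax.scan(step, s0, None, length=horizon)
    return -jnp.sum(rewards)

# First-order analytic policy gradient: exact through physics and policy,
# with the renderer left entirely outside the differentiation graph.
grad_fn = jax.grad(rollout_loss)
g = grad_fn(jnp.zeros((4, 16)), jnp.ones(4))
```

Because the policy only ever sees detached observations, backpropagation never touches the renderer's Jacobian, which is what allows an arbitrary, non-differentiable GPU rendering pipeline to be slotted in.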
📝 Abstract
In this work, we propose a computationally efficient algorithm for visual policy learning that leverages differentiable simulation and first-order analytical policy gradients. Our approach decouples the rendering process from the computation graph, enabling seamless integration with existing differentiable simulation ecosystems without the need for specialized differentiable rendering software. This decoupling not only reduces computational and memory overhead but also effectively attenuates the policy gradient norm, leading to more stable and smoother optimization. We evaluate our method on standard visual control benchmarks using modern GPU-accelerated simulation. Experiments show that our approach significantly reduces wall-clock training time and consistently outperforms all baseline methods in terms of final return. Notably, on complex tasks such as humanoid locomotion, our method achieves a $4\times$ improvement in final return and successfully learns a humanoid running policy within 4 hours on a single GPU.
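To see why the decoupling both removes the need for a differentiable renderer and attenuates the gradient norm, consider a rollout $s_{t+1} = f(s_t, a_t)$, $a_t = \pi_\theta(o_t)$, $o_t = g(s_t)$, with $f$ the differentiable dynamics and $g$ the renderer (our notation, not taken from the paper). Detaching $o_t$ deletes every renderer Jacobian $\partial g / \partial s_t$ from the backward pass, leaving

$$
\nabla_\theta J \;=\; \sum_{t=0}^{T-1} \frac{\partial J}{\partial a_t}\,\frac{\partial \pi_\theta(o_t)}{\partial \theta},
$$

where $\partial J / \partial a_t$ is accumulated backward through $f$ alone. The resulting shorter Jacobian chains are one plausible reading of the attenuated policy gradient norm and the smoother optimization reported above.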