DiffOG: Differentiable Policy Trajectory Optimization with Generalizability

📅 2025-04-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Vision-based motor policies (e.g., imitation learning) often produce non-smooth, constraint-violating action trajectories with limited interpretability, hindering deployment in safety-critical applications. To address this, we propose DiffOG: a differentiable trajectory optimization framework that couples neural policies with a generalizable, differentiable Transformer-based optimization layer for end-to-end joint training. Our key contribution is the first differentiable and generalizable trajectory optimization layer, enabling explicit or implicit constraint modeling while yielding structurally interpretable action sequences. Evaluated on 11 simulated and 2 real-world robotic tasks, DiffOG significantly improves trajectory smoothness and constraint satisfaction rates over baselines (including greedy clipping and penalty-based optimization) while preserving the original policy's task performance.

📝 Abstract
Imitation learning-based visuomotor policies excel at manipulation tasks but often produce suboptimal action trajectories compared to model-based methods. Directly mapping camera data to actions via neural networks can result in jerky motions and difficulties in meeting critical constraints, compromising safety and robustness in real-world deployment. For tasks that require high robustness or strict adherence to constraints, ensuring trajectory quality is crucial. However, the lack of interpretability in neural networks makes it challenging to generate constraint-compliant actions in a controlled manner. This paper introduces differentiable policy trajectory optimization with generalizability (DiffOG), a learning-based trajectory optimization framework designed to enhance visuomotor policies. By leveraging the proposed differentiable formulation of trajectory optimization with a transformer, DiffOG seamlessly integrates policies with a generalizable optimization layer. Visuomotor policies enhanced by DiffOG generate smoother, constraint-compliant action trajectories in a more interpretable way. DiffOG exhibits strong generalization capabilities and high flexibility. We evaluated DiffOG across 11 simulated tasks and 2 real-world tasks. The results demonstrate that DiffOG significantly enhances the trajectory quality of visuomotor policies while having minimal impact on policy performance, outperforming trajectory-processing baselines such as greedy constraint clipping and penalty-based trajectory optimization. Furthermore, DiffOG achieves superior performance compared to existing constrained visuomotor policies.
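To make the core idea concrete, here is a minimal sketch of a differentiable trajectory optimization layer: a quadratic program that keeps the refined trajectory close to the raw policy output while penalizing large step-to-step changes. This is an illustrative, unconstrained simplification under stated assumptions, not the paper's actual method (DiffOG additionally uses a transformer to parameterize the objective and handles explicit constraints); the names `smooth_layer` and `lam` are hypothetical. Because the optimum has a closed form, the layer's Jacobian with respect to the policy actions is available analytically, which is what makes end-to-end training possible.

```python
import numpy as np

def smooth_layer(actions, lam=1.0):
    """Refine a raw policy trajectory by solving
        min_x  0.5 * ||x - actions||^2 + lam * ||D x||^2,
    where D is the first-order finite-difference operator.
    The unconstrained optimum is x = (I + 2*lam*D^T D)^{-1} actions
    (here we fold the factor of 2 into lam for brevity).

    actions: (T, d) array of raw actions; returns the refined
    trajectory and the layer's Jacobian w.r.t. actions (per action
    dimension), illustrating differentiability.
    """
    T = actions.shape[0]
    # Finite-difference operator D: (T-1, T), rows are [-1, 1] stencils.
    D = np.zeros((T - 1, T))
    for t in range(T - 1):
        D[t, t], D[t, t + 1] = -1.0, 1.0
    A = np.eye(T) + lam * (D.T @ D)      # system matrix of the QP optimum
    refined = np.linalg.solve(A, actions)
    jac = np.linalg.inv(A)               # d(refined)/d(actions), since the map is linear
    return refined, jac
```

A quick check of the two properties the sketch is meant to show: the layer is linear in its input (so gradients flow through it), and it strictly reduces the trajectory's jerkiness.

```python
a = np.array([[0.0], [1.0], [0.0], [1.0]])   # a deliberately jerky trajectory
r, J = smooth_layer(a, lam=1.0)
assert np.allclose(J @ a, r)                 # output = Jacobian @ input (linear layer)
assert np.sum(np.diff(r, axis=0) ** 2) < np.sum(np.diff(a, axis=0) ** 2)
```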
Problem

Research questions and friction points this paper is trying to address.

Improving trajectory quality in visuomotor policies
Ensuring constraint compliance in neural network actions
Enhancing interpretability of learning-based trajectory optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Differentiable trajectory optimization with transformer
Generalizable optimization layer integration
Constraint-compliant action trajectory generation