🤖 AI Summary
Traditional optimal control problem (OCP) solvers can generate energy-efficient trajectories for industrial robots, but their computation time is prohibitive for real-time planning. To address this, the paper proposes a constraint-aware residual learning paradigm. Rather than directly regressing optimal trajectories, the method learns physically feasible correction terms that map nominal trajectories to OCP-optimal solutions. The approach integrates kinematic and dynamic modeling, optimal control theory, and supervised learning, training a lightweight residual neural network on high-fidelity OCP-generated data for rapid inference. Compared to conventional OCP solvers, the method achieves a 2–3 order-of-magnitude speedup in inference time. Within the training distribution it attains 87.3% of the optimal energy efficiency; under out-of-distribution conditions it retains 50.8% relative performance. The framework thus simultaneously delivers real-time capability, strong generalization, and physical implementability.
📝 Abstract
Industrial robots demand significant energy to operate, making energy-reduction methodologies increasingly important. Strategies for planning minimum-energy trajectories typically involve solving nonlinear optimal control problems (OCPs), which are rarely compatible with real-time requirements. In this paper, we propose a paradigm for generating near-minimum-energy trajectories for manipulators by learning from optimal solutions. Our paradigm leverages a residual learning approach that embeds boundary conditions and learns only the adjustments needed to steer a standard solution toward an optimal one. Compared to a computationally expensive OCP-based planner, our paradigm achieves 87.3% of the optimal performance near the training dataset and 50.8% far from it, while being two to three orders of magnitude faster.
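The core residual idea described above — keep a nominal trajectory that already satisfies the boundary conditions, and learn only a correction that vanishes at the endpoints — can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the linear nominal trajectory, the feature map, the `t*(1-t)` envelope, and the untrained stand-in weights are all assumptions for demonstration.

```python
import numpy as np

def nominal_trajectory(q0, qf, t):
    # Simple linear interpolation between start q0 and goal qf,
    # a stand-in for any baseline planner output that meets the
    # boundary conditions.
    return (1.0 - t)[:, None] * q0 + t[:, None] * qf

def residual_correction(t, weights):
    # Hypothetical learned residual: features of normalized time
    # pushed through a tiny linear layer (stand-in for a trained
    # lightweight residual network).
    feats = np.stack([t, t**2, np.sin(np.pi * t)], axis=1)  # (T, 3)
    return feats @ weights                                  # (T, dof)

def planned_trajectory(q0, qf, t, weights):
    # The envelope t*(1 - t) is zero at both endpoints, so the
    # boundary conditions of the nominal trajectory are preserved
    # exactly, regardless of what the network outputs.
    envelope = (t * (1.0 - t))[:, None]
    correction = residual_correction(t, weights)
    return nominal_trajectory(q0, qf, t) + envelope * correction

q0 = np.array([0.0, 0.5])           # start joint configuration (2-DOF example)
qf = np.array([1.2, -0.3])          # goal joint configuration
t = np.linspace(0.0, 1.0, 101)      # normalized time grid
rng = np.random.default_rng(0)
weights = rng.normal(size=(3, 2))   # untrained stand-in weights

traj = planned_trajectory(q0, qf, t, weights)
```

Even with arbitrary (untrained) weights, `traj[0]` equals `q0` and `traj[-1]` equals `qf`, which is what makes the learned correction physically implementable by construction.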