🤖 AI Summary
Reinforcement learning (RL) policies for robot motor control typically require task-specific training from scratch, hindering knowledge transfer across tasks. Method: This paper proposes a pretraining paradigm based on a proprioception-aware inverse dynamics model (PIDM). Leveraging task-agnostic exploration data, PIDM is trained via supervised learning to yield a general-purpose inverse dynamics representation. The learned features serve as transferable weight initializations for both the policy and value networks within an Actor–Critic framework. Notably, this work is the first to systematically integrate pretraining into the canonical Proximal Policy Optimization (PPO) algorithm, enabling cross-task policy warm-starting. Contribution/Results: Evaluated on seven diverse robotic locomotion tasks, the approach improves sample efficiency by 40.1% on average and final performance by 7.5%. Ablation studies confirm the critical roles of PIDM modeling and weight transfer in achieving these gains.
📝 Abstract
The pretraining-finetuning paradigm has facilitated numerous transformative advancements in artificial intelligence research in recent years. However, in the domain of reinforcement learning (RL) for robot motion control, individual skills are often learned from scratch despite the high likelihood that some generalizable knowledge is shared across all task-specific policies belonging to a single robot embodiment. This work aims to define a paradigm for pretraining neural network models that encapsulate such knowledge and can subsequently serve as a basis for warm-starting the RL process in classic actor-critic algorithms, such as Proximal Policy Optimization (PPO). We begin with a task-agnostic exploration-based data collection algorithm to gather diverse, dynamic transition data, which is then used to train a Proprioceptive Inverse Dynamics Model (PIDM) through supervised learning. The pretrained weights are loaded into both the actor and critic networks to warm-start the policy optimization of actual tasks. We systematically validated our proposed method on seven distinct robot motion control tasks, showing significant benefits to this initialization strategy. Our proposed approach on average improves sample efficiency by 40.1% and task performance by 7.5%, compared to random initialization. We further present key ablation studies and empirical analyses that shed light on the mechanisms behind the effectiveness of our method.