🤖 AI Summary
In complex high-dimensional environments, adversarial imitation learning suffers from coarse, ineffective guidance because its discriminator is agnostic to the agent's current capabilities. To address this, the paper proposes RILe, a trainer-student framework. Its core contribution is a dynamic reward generation mechanism: the trainer, itself trained with policy-gradient reinforcement learning (PPO/TRPO), continuously adapts the discriminator's feedback into progressive, fine-grained rewards tailored to the student's evolving proficiency, overcoming the limitations of conventional binary expert/non-expert discrimination. Technically, the framework integrates adversarial imitation learning and inverse reinforcement learning, with the learned reward decoupled from the discriminator itself. Evaluated on high-dimensional simulated robot-locomotion tasks, the method outperforms existing baselines by 2x, improves exploration efficiency, and strengthens alignment with expert demonstrations.
📝 Abstract
Reinforcement Learning has achieved significant success in generating complex behavior but often requires extensive reward function engineering. Adversarial variants of Imitation Learning and Inverse Reinforcement Learning offer an alternative by learning policies from expert demonstrations via a discriminator. However, these methods struggle in complex tasks where randomly sampling expert-like behaviors is challenging. This limitation stems from their reliance on policy-agnostic discriminators, which provide insufficient guidance for agent improvement, especially as task complexity increases and expert behavior becomes more distinct. We introduce RILe (Reinforced Imitation Learning environment), a novel trainer-student system that learns a dynamic reward function based on the student's performance and alignment with expert demonstrations. In RILe, the student learns an action policy while the trainer, using reinforcement learning, continuously updates itself via the discriminator's feedback to optimize the alignment between the student and the expert. The trainer optimizes for long-term cumulative rewards from the discriminator, enabling it to provide nuanced feedback that accounts for the complexity of the task and the student's current capabilities. This approach allows for greater exploration of agent actions by providing graduated feedback rather than binary expert/non-expert classifications. By reducing dependence on policy-agnostic discriminators, RILe enables better performance in complex settings where traditional methods falter, outperforming existing methods by 2x in complex simulated robot-locomotion tasks.
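The trainer-student-discriminator loop described in the abstract can be sketched as a toy example. Everything below is illustrative, not the paper's implementation: the tabular one-step setup, the discriminator update rule, the trainer's single reward-scaling parameter, and all learning rates are assumptions chosen to keep the sketch self-contained. The key idea it demonstrates is that the student never sees the discriminator's raw expert/non-expert score directly; a separately updated trainer shapes that score into the student's reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: a handful of discrete states; the (hypothetical) expert
# always takes action 1 in every state.
N_STATES, N_ACTIONS = 4, 2
expert_actions = np.ones(N_STATES, dtype=int)

# Discriminator: one logit per (state, action); higher = more expert-like.
disc_w = np.zeros((N_STATES, N_ACTIONS))

# Trainer: here reduced to a single scaling parameter that converts the
# discriminator's score into the student's reward, and is itself updated
# from the discriminator's feedback (a stand-in for the trainer's RL step).
trainer_scale = 1.0

# Student: tabular softmax policy over actions.
student_logits = np.zeros((N_STATES, N_ACTIONS))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

for step in range(500):
    s = rng.integers(N_STATES)
    probs = softmax(student_logits[s])
    a = rng.choice(N_ACTIONS, p=probs)

    # Discriminator update: push the expert's (s, a*) up and the
    # student's non-expert choices down.
    disc_w[s, expert_actions[s]] += 0.05
    disc_w[s, a] -= 0.05 * (a != expert_actions[s])

    score = 1.0 / (1.0 + np.exp(-disc_w[s, a]))  # expert-likeness in (0, 1)
    reward = trainer_scale * score               # trainer-shaped student reward

    # Student: REINFORCE-style policy-gradient step on the trainer's reward.
    grad = -probs
    grad[a] += 1.0
    student_logits[s] += 0.1 * reward * grad

    # Trainer update: reinforce the shaping when discriminator scores improve.
    trainer_scale += 0.01 * (score - 0.5)

learned = softmax(student_logits).argmax(axis=1)  # per-state greedy actions
```

In this sketch the graduated signal comes from the sigmoid discriminator score rather than a hard 0/1 label, which is what lets partially-expert-like behavior still earn partial reward; the paper's actual trainer is a full RL agent optimizing long-term cumulative discriminator feedback, not a single scalar.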