🤖 AI Summary
This work addresses the challenge of reward-function generalization in cross-task robotic skill transfer. We propose the first abstraction-based, transferable inverse reward learning framework, which jointly models state abstraction and cross-domain policy distillation via inverse reinforcement learning (IRL). Leveraging variational inference, our method learns task-agnostic, shared reward representations that decouple and reuse reward structures across semantically related tasks, overcoming the fundamental limitation of conventional IRL methods, which are confined to single-task reward fitting. Extensive experiments on the OpenAI Gym and AssistiveGym benchmarks demonstrate that the learned abstract rewards transfer successfully to unseen tasks, yielding an average 37% improvement in policy success rates. These results empirically validate the framework's strong generalization capability and establish a new paradigm for scalable, reusable reward learning in robotics.
📝 Abstract
Inverse reinforcement learning (IRL) has progressed significantly toward accurately learning the underlying rewards in both discrete and continuous domains from behavior data. The next advance is to learn *intrinsic* preferences in ways that produce useful behavior in settings or tasks that are different from, but aligned with, the observed ones. In the context of robotic applications, this helps integrate robots into processing lines involving new tasks (with shared intrinsic preferences) without programming them from scratch. We introduce a method to inversely learn an abstract reward function from behavior trajectories in two or more differing instances of a domain. The abstract reward function is then used to learn task behavior in another, separate instance of the domain; this step offers evidence of its transferability and validates its correctness. We evaluate the method on trajectories from tasks in multiple domains in OpenAI's Gym testbed and AssistiveGym, and show that the learned abstract reward functions can successfully learn task behaviors in previously unseen instances of the respective domains.