🤖 AI Summary
In real-world robotic tasks, expert demonstrations are scarce and often suboptimal, particularly in long-horizon, adversarial settings where hierarchical structure is essential.
Method: This paper proposes a preference-based hierarchical inverse reinforcement learning (IRL) framework tailored for long-horizon and adversarial tasks. It extends preference learning to such settings for the first time, integrating suboptimal hierarchical demonstration modeling, layer-wise reward disentanglement, sample-efficient preference inference, and sim-to-real transfer to enhance deployment robustness.
Contribution/Results: Conventional IRL methods require optimal expert trajectories; this framework relaxes that stringent assumption, substantially improving the accuracy and generalizability of reward function inference. Evaluated on a simulated maritime offense-defense task, it outperforms state-of-the-art baselines. Sim-to-real experiments on an unmanned surface vehicle further demonstrate its practical feasibility and effectiveness in real-world deployment.
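The core idea behind preference-based reward inference can be illustrated with a Bradley-Terry preference model: the reward is fit so that preferred trajectory segments receive higher return. The sketch below is a minimal, generic illustration of that principle; the linear reward parameterization, function names, and update rule are assumptions for exposition, not SPLASH's actual hierarchical architecture.

```python
import numpy as np

# Minimal sketch of preference-based reward learning (Bradley-Terry model).
# A segment is a list of feature vectors phi(s); the reward is assumed
# linear, r(s) = theta . phi(s). These are illustrative choices only.

def segment_return(theta, segment):
    """Sum of linear rewards over one trajectory segment."""
    return sum(float(theta @ phi) for phi in segment)

def preference_loss(theta, seg_a, seg_b, pref_a):
    """Negative log-likelihood that seg_a is preferred (pref_a in {0, 1})."""
    ra, rb = segment_return(theta, seg_a), segment_return(theta, seg_b)
    p_a = 1.0 / (1.0 + np.exp(rb - ra))  # P(A preferred) under Bradley-Terry
    eps = 1e-8  # numerical guard for log
    return -(pref_a * np.log(p_a + eps) + (1 - pref_a) * np.log(1.0 - p_a + eps))

def grad_step(theta, seg_a, seg_b, pref_a, lr=0.1):
    """One analytic gradient step on the preference NLL."""
    ra, rb = segment_return(theta, seg_a), segment_return(theta, seg_b)
    p_a = 1.0 / (1.0 + np.exp(rb - ra))
    phi_diff = sum(seg_a) - sum(seg_b)  # difference of feature counts
    grad = -(pref_a - p_a) * phi_diff   # d(NLL)/d(theta)
    return theta - lr * grad
```

Because the loss depends only on pairwise preference labels rather than on demonstrated returns, the learner never needs to assume the demonstrations themselves are optimal, which is the relaxation the summary above describes.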
📝 Abstract
Inverse Reinforcement Learning (IRL) presents a powerful paradigm for learning complex robotic tasks from human demonstrations. However, most approaches assume that optimal expert demonstrations are available, which is often not the case. Those that allow for suboptimality in the demonstrations are not designed for long-horizon goals or adversarial tasks. Many desirable robot capabilities fall into one or both of these categories, highlighting a critical shortcoming in the ability of IRL to produce field-ready robotic agents. We introduce Sample-efficient Preference-based inverse reinforcement learning for Long-horizon Adversarial tasks from Suboptimal Hierarchical demonstrations (SPLASH), which advances the state-of-the-art in learning from suboptimal demonstrations to long-horizon and adversarial settings. We empirically validate SPLASH on a maritime capture-the-flag task in simulation, and demonstrate real-world applicability with sim-to-real translation experiments on autonomous unmanned surface vehicles. We show that our proposed methods allow SPLASH to significantly outperform the state-of-the-art in reward learning from suboptimal demonstrations.