🤖 AI Summary
In autonomous driving and robotics, AI systems must infer decision-making intent from a minimal number of human demonstrations—a key challenge in inverse reinforcement learning (IRL). Method: This paper proposes a trajectory-level active IRL framework. Unlike conventional state-wise querying, it introduces an information-theoretic trajectory-level query strategy (based on KL divergence and expected entropy reduction), integrated with Bayesian inference over the reward, and develops a scalable approximation algorithm to reduce computational overhead. Contribution/Results: Evaluated in gridworld environments, the method substantially reduces the number of required demonstrations—by up to 62%—while improving reward-inference accuracy and sample efficiency. By shifting active learning from state-level to trajectory-level granularity, this work addresses a resolution bottleneck in active IRL and lays groundwork for human–robot preference alignment in more complex settings.
📝 Abstract
As AI systems become increasingly autonomous, aligning their decision-making with human preferences is essential. In domains like autonomous driving or robotics, it is impossible to write down the reward function representing these preferences by hand. Inverse reinforcement learning (IRL) offers a promising approach to infer the unknown reward from demonstrations. However, obtaining human demonstrations can be costly. Active IRL addresses this challenge by strategically selecting the most informative scenarios for human demonstration, reducing the amount of required human effort. Whereas most prior work queries the human for an action at a single state at a time, we motivate and analyse settings where we collect longer trajectories. We provide an information-theoretic acquisition function, propose an efficient approximation scheme, and illustrate its performance through a set of gridworld experiments as groundwork for future work expanding to more general settings.
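The acquisition idea can be illustrated with a small sketch. Assuming a discrete belief over candidate reward functions and a Boltzmann-rational demonstrator, a trajectory-level query can be scored by its expected information gain: the prior entropy of the belief minus the expected posterior entropy after observing the demonstrated trajectory. All names and numbers below are illustrative assumptions, not the paper's implementation.

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

def traj_likelihood(traj_return, all_returns, beta=2.0):
    # Boltzmann-rational demonstrator: P(tau | theta) ∝ exp(beta * return).
    z = sum(math.exp(beta * r) for r in all_returns)
    return math.exp(beta * traj_return) / z

def posterior(prior, returns_per_theta, traj_idx, beta=2.0):
    # Bayes update over reward hypotheses after observing trajectory traj_idx.
    post = [prior[k] * traj_likelihood(returns_per_theta[k][traj_idx],
                                       returns_per_theta[k], beta)
            for k in range(len(prior))]
    z = sum(post)
    return [p / z for p in post]

def expected_info_gain(prior, returns_per_theta, beta=2.0):
    # EIG = H(prior) - E_tau[ H(posterior | tau) ], with the expectation
    # over trajectories taken under the current belief.
    n_traj = len(returns_per_theta[0])
    eig = entropy(prior)
    for j in range(n_traj):
        # Marginal probability of the demonstrator producing trajectory j.
        p_tau = sum(prior[k] * traj_likelihood(returns_per_theta[k][j],
                                               returns_per_theta[k], beta)
                    for k in range(len(prior)))
        eig -= p_tau * entropy(posterior(prior, returns_per_theta, j, beta))
    return eig

# Toy example: two reward hypotheses, two candidate trajectory-level queries.
# returns_per_theta[k][j] = return of trajectory j under hypothesis k.
prior = [0.5, 0.5]
query_A = [[1.0, 0.0], [0.0, 1.0]]  # trajectories discriminate the hypotheses
query_B = [[1.0, 0.0], [1.0, 0.0]]  # identical returns: nothing to learn
best = max([query_A, query_B], key=lambda q: expected_info_gain(prior, q))
```

In this toy case `query_A` has positive expected information gain while `query_B` has none, so the active learner would query the former. The paper's approximation scheme would replace the exact enumeration of trajectories here, which is intractable beyond tiny problems.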