🤖 AI Summary
To address low sample efficiency and limited trajectory exploration in human preference learning for long-horizon robotic tasks, this paper proposes CRED, the first active preference learning framework to integrate counterfactual reasoning with environment design. CRED actively constructs high-information trajectory queries by generating counterfactual scenarios and adaptively tuning environment parameters; it then uses Bayesian reward sampling to jointly optimize trajectory ranking and reward-function estimation. Evaluated on GridWorld simulations and real-world navigation tasks built on OpenStreetMap data, CRED converges roughly 40% faster in preference learning and generalizes better across environments. Under multi-objective trade-offs among distance, time, and safety, it also produces more robust, Pareto-optimal trajectories.
📝 Abstract
For effective real-world deployment, robots should adapt to human preferences, such as balancing distance, time, and safety in delivery routing. Active preference learning (APL) learns human reward functions by presenting trajectories for ranking. However, existing methods often struggle to explore the full trajectory space and fail to identify informative queries, particularly in long-horizon tasks. We propose CRED, a trajectory generation method for APL that improves reward estimation by jointly optimizing environment design and trajectory selection. CRED "imagines" new scenarios through environment design and uses counterfactual reasoning -- by sampling rewards from its current belief and asking "What if this reward were the true preference?" -- to generate a diverse and informative set of trajectories for ranking. Experiments in GridWorld and real-world navigation using OpenStreetMap data show that CRED improves reward learning and generalizes effectively across different environments.
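The counterfactual step described above (sample a reward from the current belief, ask "what if this were the true preference?", and keep the trajectory that reward would favor) can be sketched in miniature. This is an illustrative toy, not the paper's implementation: it assumes a small GridWorld with a hand-made risk map, a linear reward over (distance, turns, risk) features, and a Gaussian belief over the reward weights; all names (`monotone_paths`, `counterfactual_query`, the feature set) are hypothetical.

```python
import itertools
import random

GRID = 5  # 5x5 GridWorld, start (0,0), goal (GRID-1, GRID-1)

# Hand-made per-cell risk map (higher = less safe); purely illustrative.
RISK = [
    [0, 0, 1, 0, 0],
    [0, 3, 3, 1, 0],
    [1, 3, 5, 3, 1],
    [0, 1, 3, 3, 0],
    [0, 0, 1, 0, 0],
]

def monotone_paths(n):
    """Enumerate all right/down paths from (0,0) to (n-1,n-1)."""
    moves = ['R'] * (n - 1) + ['D'] * (n - 1)
    return sorted(set(itertools.permutations(moves)))  # dedupe, fix order

def features(path):
    """Trajectory features: (distance, number of turns, accumulated risk)."""
    r, c = 0, 0
    risk = RISK[0][0]
    turns = 0
    for i, m in enumerate(path):
        if m == 'R':
            c += 1
        else:
            r += 1
        risk += RISK[r][c]
        if i > 0 and path[i] != path[i - 1]:
            turns += 1
    return (len(path), turns, risk)

def best_trajectory(paths, w):
    """Optimal trajectory if `w` were the true (cost) weights."""
    return min(paths, key=lambda p: sum(wi * fi for wi, fi in zip(w, features(p))))

def counterfactual_query(paths, belief_mean, belief_std, k, seed=0):
    """Sample k rewards from a Gaussian belief; for each, keep the
    trajectory that reward would prefer. The deduplicated set forms
    a diverse ranking query for the human."""
    rng = random.Random(seed)
    query = []
    for _ in range(k):
        # Counterfactual: "what if this sampled reward were the truth?"
        w = [rng.gauss(m, s) for m, s in zip(belief_mean, belief_std)]
        traj = best_trajectory(paths, w)
        if traj not in query:
            query.append(traj)
    return query

paths = monotone_paths(GRID)
# Belief over weights for (distance, turns, risk); values are made up.
query = counterfactual_query(paths, belief_mean=[1.0, 0.5, 2.0],
                             belief_std=[0.2, 0.5, 1.5], k=8)
```

Each sampled reward plays the role of a candidate "true preference"; disagreement among the resulting optimal trajectories is what makes the query informative. The paper additionally optimizes the environment itself (here the fixed `RISK` map) to amplify that disagreement.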