CRED: Counterfactual Reasoning and Environment Design for Active Preference Learning

📅 2025-07-07
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
To address low sample efficiency and insufficient trajectory exploration in human preference learning for long-horizon robotic tasks, this paper proposes CRED—the first active preference learning framework integrating counterfactual reasoning with environment design. CRED actively constructs high-information trajectory queries by generating counterfactual scenarios and adaptively tuning environment parameters; it then employs Bayesian reward sampling to jointly optimize trajectory ranking and reward function estimation. Evaluated on both GridWorld simulations and real-world navigation tasks grounded in OpenStreetMap, CRED demonstrates significant improvements: approximately 40% faster convergence in preference learning and enhanced cross-environment generalization. Moreover, under multi-objective trade-offs—including distance, time, and safety—CRED generates more robust and Pareto-optimal trajectories.

Technology Category

Application Category

📝 Abstract
For effective real-world deployment, robots should adapt to human preferences, such as balancing distance, time, and safety in delivery routing. Active preference learning (APL) learns human reward functions by presenting trajectories for ranking. However, existing methods often struggle to explore the full trajectory space and fail to identify informative queries, particularly in long-horizon tasks. We propose CRED, a trajectory generation method for APL that improves reward estimation by jointly optimizing environment design and trajectory selection. CRED "imagines" new scenarios through environment design and uses counterfactual reasoning -- by sampling rewards from its current belief and asking "What if this reward were the true preference?" -- to generate a diverse and informative set of trajectories for ranking. Experiments in GridWorld and real-world navigation using OpenStreetMap data show that CRED improves reward learning and generalizes effectively across different environments.
Problem

Research questions and friction points this paper is trying to address.

Robots adapting to human preferences in real-world tasks
Improving active preference learning for reward estimation
Generating diverse trajectories via counterfactual reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Jointly optimizes environment design and trajectory selection
Uses counterfactual reasoning for diverse trajectory generation
Improves reward learning through imagined scenarios
🔎 Similar Papers
No similar papers found.