🤖 AI Summary
Physical evaluation of multi-task robotic policies is costly and inefficient. Method: This work frames policy evaluation as an active testing problem and introduces active learning to this domain for the first time. It uses natural-language task representations to capture semantic relationships between tasks, constructs a Bayesian performance model to dynamically estimate the policy's performance distribution, and proposes a cost-aware expected information gain criterion compatible with both continuous and discrete performance feedback. Active experiment selection is validated jointly in simulation and on real robots. Contribution/Results: The approach significantly improves coverage completeness and sampling efficiency while maintaining evaluation accuracy. Experiments demonstrate over a 60% reduction in required trials compared to random evaluation, substantially lowering human intervention costs.
📝 Abstract
Evaluating learned robot control policies to determine their physical task-level capabilities costs experimenter time and effort. The growing number of policies and tasks exacerbates this issue. It is impractical to test every policy on every task multiple times; each trial requires a manual environment reset, and each task change involves rearranging objects or even changing robots. Naively selecting a random subset of tasks and policies to evaluate is a high-cost solution with unreliable, incomplete results. In this work, we formulate robot evaluation as an active testing problem. We propose to model the distribution of robot performance across all tasks and policies as we sequentially execute experiments. Tasks often share similarities that can reveal potential relationships in policy behavior, and we show that natural language is a useful prior for modeling these relationships between tasks. We then leverage this formulation to reduce experimenter effort by using a cost-aware expected information gain heuristic to efficiently select informative trials. Our framework accommodates both continuous and discrete performance outcomes. We conduct experiments on existing evaluation data from real robots and simulations. By prioritizing informative trials, our framework reduces the cost of calculating evaluation metrics for robot policies across many tasks.
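To make the selection heuristic concrete, here is a minimal sketch of cost-aware active trial selection, not the paper's implementation: each (policy, task) pair keeps a Beta posterior over its success rate, and the next trial is the one with the highest expected posterior-variance reduction per unit cost (a common stand-in for expected information gain in the Bernoulli case). All policy/task names and cost values below are hypothetical.

```python
# Hedged sketch: greedy cost-aware trial selection for binary outcomes.
# Expected variance reduction is used as a tractable proxy for
# expected information gain; the paper's full criterion also handles
# continuous outcomes and language-based task similarity.

def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def expected_variance_reduction(a, b):
    """Expected drop in posterior variance from one more Bernoulli trial."""
    p = a / (a + b)  # posterior predictive probability of success
    after = p * beta_var(a + 1, b) + (1 - p) * beta_var(a, b + 1)
    return beta_var(a, b) - after

def select_trial(posteriors, costs):
    """Pick the (policy, task) pair with the best gain-per-cost score."""
    return max(posteriors,
               key=lambda k: expected_variance_reduction(*posteriors[k]) / costs[k])

# Hypothetical evaluation state: Beta(successes + 1, failures + 1) per pair.
posteriors = {("policy_A", "pick_cube"): (1, 1),    # untested
              ("policy_A", "open_drawer"): (5, 2),  # already well sampled
              ("policy_B", "pick_cube"): (2, 2)}
costs = {("policy_A", "pick_cube"): 1.0,
         ("policy_A", "open_drawer"): 1.0,
         ("policy_B", "pick_cube"): 3.0}  # e.g. requires a robot change

print(select_trial(posteriors, costs))  # -> ('policy_A', 'pick_cube')
```

The untested pair wins here: its posterior is most uncertain and its trial is cheap, which is exactly the behavior the cost-aware criterion is meant to produce.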