Inference-Time Policy Steering through Human Interactions

📅 2024-11-25
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the challenge that generative policies struggle to adapt to real-time human intent during inference—leading to distribution shift and execution failure—this paper proposes a fine-tuning-free, human-in-the-loop inference guidance framework. The method directly reweights the stochastic sampling process of diffusion policies using interaction signals, incorporating a multimodal alignment distance metric that jointly evaluates trajectory shape, subgoal consistency, and physical feasibility. This enables precise intent grounding while preserving in-distribution stability. Evaluated on three simulated and real-world benchmarks, the proposed stochastic diffusion sampling strategy achieves the best trade-off between alignment accuracy and distribution-shift mitigation among six competing methods, significantly reducing execution failure rates. To the best of the authors' knowledge, this is the first framework to embed human interaction directly into the diffusion policy's inference process without fine-tuning on interaction data.
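The summary above describes biasing a diffusion policy's stochastic sampling loop with an alignment distance, rather than fine-tuning the policy. A minimal sketch of that general idea follows; this is not the paper's actual algorithm, and `toy_denoiser`, `alignment_cost`, `guide_weight`, and all other names and parameters here are illustrative assumptions standing in for a trained diffusion policy and the paper's multimodal metric:

```python
import numpy as np

def alignment_cost(traj, user_goal):
    # Hypothetical alignment metric: distance from the trajectory's
    # final waypoint to a user-specified goal point.
    return np.linalg.norm(traj[-1] - user_goal)

def toy_denoiser(traj, t):
    # Stand-in for a trained diffusion policy's denoising step:
    # pulls the noisy trajectory toward a straight line to (1, 0).
    target = np.linspace([0.0, 0.0], [1.0, 0.0], len(traj))
    return traj + 0.2 * (target - traj)

def steered_sampling(denoise_step, user_goal, n_steps=50, horizon=16,
                     guide_weight=0.5, seed=0):
    """Toy guided reverse-diffusion loop over a 2D trajectory.

    At each denoising step, nudge the sample along the negative
    gradient of the alignment cost, biasing generation toward the
    human's intent without retraining the underlying policy.
    """
    rng = np.random.default_rng(seed)
    traj = rng.normal(size=(horizon, 2))      # start from pure noise
    for t in range(n_steps, 0, -1):
        traj = denoise_step(traj, t)          # policy's denoising update
        # Gradient of the toy alignment cost w.r.t. the final waypoint
        # (only the endpoint enters this simplified metric).
        grad = (traj[-1] - user_goal) / (np.linalg.norm(traj[-1] - user_goal) + 1e-8)
        # Guidance strength decays as denoising finishes, so late steps
        # stay close to the policy's learned distribution.
        traj[-1] -= guide_weight * (t / n_steps) * grad
    return traj

# Usage: steer toward a user goal the unguided policy would not reach.
user_goal = np.array([1.0, 1.0])
unsteered = steered_sampling(toy_denoiser, user_goal, guide_weight=0.0)
steered = steered_sampling(toy_denoiser, user_goal, guide_weight=0.5)
```

With guidance enabled, the sampled trajectory's endpoint lands measurably closer to the user's goal than the unguided sample, while the denoiser still anchors the rest of the trajectory near the learned distribution—the trade-off the summary calls alignment versus distribution shift.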

📝 Abstract
Generative policies trained with human demonstrations can autonomously accomplish multimodal, long-horizon tasks. However, during inference, humans are often removed from the policy execution loop, limiting the ability to guide a pre-trained policy towards a specific sub-goal or trajectory shape among multiple predictions. Naive human intervention may inadvertently exacerbate distribution shift, leading to constraint violations or execution failures. To better align policy output with human intent without inducing out-of-distribution errors, we propose an Inference-Time Policy Steering (ITPS) framework that leverages human interactions to bias the generative sampling process, rather than fine-tuning the policy on interaction data. We evaluate ITPS across three simulated and real-world benchmarks, testing three forms of human interaction and associated alignment distance metrics. Among six sampling strategies, our proposed stochastic sampling with diffusion policy achieves the best trade-off between alignment and distribution shift. Videos are available at https://yanweiw.github.io/itps/.
Problem

Research questions and friction points this paper is trying to address.

Align generative policies with human intent during inference
Prevent distribution shift from naive human intervention
Steer policy output using interactions without fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages human interactions for policy steering
Biases generative sampling without fine-tuning
Uses stochastic sampling with diffusion policy
🔎 Similar Papers
📅 2024-09-30 · 🏛️ International Conference on Human-Agent Interaction · 📈 Citations: 1