Steering Robots with Inference-Time Interactions

📅 2025-06-17
📈 Citations: 0
Influential: 0
🤖 AI Summary
Pretrained robotic policies lack real-time error correction during deployment, while task-specific data collection and fine-tuning are inefficient. This paper introduces a novel, fine-tuning-free paradigm for interactive correction at inference time: the pretrained policy remains frozen, while online skill switching and task-motion co-editing integrate user interaction signals, symbolic task planning, and continuous motion optimization to enable zero-shot, intent-driven behavior correction. Our key contribution is the first unification—within the inference stage—of discrete skill scheduling, constraint-satisfaction reasoning, and real-time user guidance. We validate the approach on multiple simulated tasks and real robotic platforms. Results demonstrate significant improvements in task success rate and user controllability, eliminating the need for repeated data collection or model retraining.

📝 Abstract
Imitation learning has driven the development of generalist policies capable of autonomously solving multiple tasks. However, when a pretrained policy makes errors during deployment, there are limited mechanisms for users to correct its behavior. While collecting additional data for finetuning can address such issues, doing so for each downstream use case is inefficient at deployment. My research proposes an alternative: keeping pretrained policies frozen as a fixed skill repertoire while allowing user interactions to guide behavior generation toward user preferences at inference time. By making pretrained policies steerable, users can help correct policy errors when the model struggles to generalize, without needing to finetune the policy. Specifically, I propose (1) inference-time steering, which leverages user interactions to switch between discrete skills, and (2) task and motion imitation, which enables user interactions to edit continuous motions while satisfying task constraints defined by discrete symbolic plans. These frameworks correct misaligned policy predictions without requiring additional training, maximizing the utility of pretrained models while achieving inference-time user objectives.
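The inference-time steering idea from the abstract can be sketched as re-ranking: the frozen policy proposes candidate skills with confidence scores, and a user signal (here a pointed goal location) re-weights the candidates without any gradient update. This is a minimal illustration, not the paper's implementation; the function name `steer`, the exponential distance weighting, and the `alpha` trade-off are all assumptions chosen for clarity.

```python
import math


def steer(policy_scores, candidate_goals, user_goal, alpha=0.5):
    """Re-rank frozen-policy skill candidates with a user signal.

    policy_scores:   confidence the frozen policy assigns to each skill.
    candidate_goals: the point in the workspace each skill would reach.
    user_goal:       the point the user indicated (e.g., a click or gesture).
    alpha:           trade-off between policy confidence and user alignment.
    Returns the index of the skill to execute.
    """
    # Skills whose outcome lies closer to the user's goal score higher.
    user_scores = [math.exp(-math.dist(g, user_goal)) for g in candidate_goals]
    # Blend the two signals; the policy itself is never updated.
    combined = [(1 - alpha) * p + alpha * u
                for p, u in zip(policy_scores, user_scores)]
    return combined.index(max(combined))
```

With `alpha` near 1 the user signal can override a confident but misaligned policy prediction, which is the zero-shot correction behavior the abstract describes.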
Problem

Research questions and friction points this paper is trying to address.

Enable user correction of pretrained policies without finetuning
Steer robot behavior via inference-time interactions
Edit continuous motions while respecting task constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Inference-time steering with user interactions
Switching between discrete skills dynamically
Editing continuous motions under task constraints