Using High-Level Patterns to Estimate How Humans Predict a Robot will Behave

📅 2024-09-20
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
Human predictions of robot behavior are inherently high-level and coarse-grained—focusing on semantic intent rather than precise trajectories—yet most existing interactive models assume accurate trajectory-level prediction, compromising safety and naturalness. To address this, we propose a behavior-level prediction framework grounded in second-order theory of mind: it maps human–robot trajectories into a discrete behavioral-type latent space and decodes them into semantically consistent vector fields, enabling robots to infer how humans predict their high-level intentions (e.g., “maintain lane”). This work is the first to systematically incorporate the cognitive characteristics of human predictive reasoning into human–robot interaction modeling. Experiments on both simulated and real-world driving datasets demonstrate that our framework achieves significantly higher fidelity to actual human judgments compared to conventional trajectory-prediction baselines, thereby enhancing interaction safety and fluency.

📝 Abstract
Humans interacting with robots often form predictions of what the robot will do next. For instance, based on the recent behavior of an autonomous car, a nearby human driver might predict that the car is going to remain in the same lane. It is important for the robot to understand the human's prediction for safe and seamless interaction: e.g., if the autonomous car knows the human thinks it is not merging -- but the autonomous car actually intends to merge -- then the car can adjust its behavior to prevent an accident. Prior works typically assume that humans make precise predictions of robot behavior. However, recent research on human-human prediction suggests the opposite: humans tend to approximate other agents by predicting their high-level behaviors. We apply this finding to develop a second-order theory of mind approach that enables robots to estimate how humans predict they will behave. To extract these high-level predictions directly from data, we embed the recent human and robot trajectories into a discrete latent space. Each element of this latent space captures a different type of behavior (e.g., merging in front of the human, remaining in the same lane) and decodes into a vector field across the state space that is consistent with the underlying behavior type. We hypothesize that our resulting high-level and coarse predictions of robot behavior will correspond to actual human predictions. We provide initial evidence in support of this hypothesis through proof-of-concept simulations, testing our method's predictions against those of real users, and experiments on a real-world interactive driving dataset.
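The core pipeline the abstract describes -- embed a recent trajectory into a discrete latent space of behavior types, then decode each type into a vector field over the state space -- can be illustrated with a toy sketch. This is not the authors' implementation; the encoder, codebook, and the three example behavior types below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 3   # number of discrete behavior types (assumed, e.g., merge / stay / yield)
D = 8   # dimension of the continuous trajectory embedding (assumed)

# Toy "encoder": a fixed linear map from a flattened 10-waypoint (x, y) window.
W_enc = rng.normal(size=(D, 20))
# Codebook: one latent vector per discrete behavior type (VQ-style lookup).
codebook = rng.normal(size=(K, D))

def encode(traj):
    """Map a (10, 2) trajectory window to the nearest discrete behavior type."""
    z = W_enc @ traj.reshape(-1)                   # continuous embedding
    dists = np.linalg.norm(codebook - z, axis=1)   # nearest codebook entry
    return int(np.argmin(dists))                   # discrete latent index

def decode(k, xy):
    """Decode behavior type k into a vector field u(x, y) over query states xy."""
    n = len(xy)
    if k == 0:    # e.g., "remain in the same lane": constant forward flow
        return np.tile([1.0, 0.0], (n, 1))
    elif k == 1:  # e.g., "merge": forward flow with lateral drift
        return np.column_stack([np.ones(n), 0.5 * np.ones(n)])
    else:         # e.g., "yield": forward flow that decays with position
        return np.column_stack([np.exp(-0.1 * xy[:, 0]), np.zeros(n)])

traj = rng.normal(size=(10, 2))   # stand-in for an observed human-robot window
k = encode(traj)
grid = np.stack(np.meshgrid(np.linspace(0, 10, 5),
                            np.linspace(-2, 2, 5)), axis=-1).reshape(-1, 2)
field = decode(k, grid)           # coarse prediction of robot motion
print(k, field.shape)
```

In the paper, the encoder and decoder are presumably learned from data so that each latent element stays semantically consistent with one behavior type; here both are hard-coded purely to show the encode-to-index, decode-to-field structure.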
Problem

Research questions and friction points this paper is trying to address.

Estimate human predictions of robot behavior using high-level patterns.
Develop a second-order theory of mind for robot-human interaction.
Validate high-level behavior predictions through simulations and real-world data.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Second-order theory of mind approach
Discrete latent space embedding
High-level behavior prediction decoding
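The second-order theory-of-mind idea above is about acting on the mismatch between what the robot intends and what the human predicts it will do (the abstract's merging example). A minimal sketch, with hypothetical behavior labels and an assumed "signal first" fallback policy:

```python
# Hypothetical sketch: adjusting the robot's plan when the inferred human
# prediction of the robot's behavior disagrees with the robot's actual intent.

def adjust_plan(robot_intent: str, inferred_human_prediction: str) -> str:
    """Return the behavior to execute given a second-order estimate of
    what the human predicts the robot will do."""
    if robot_intent == inferred_human_prediction:
        return robot_intent          # human already expects this; proceed
    # Mismatch (e.g., human thinks "stay-in-lane" but the robot wants to
    # merge): act conservatively and signal before the intended maneuver.
    return f"signal-then-{robot_intent}"

print(adjust_plan("merge", "stay-in-lane"))        # mismatch case
print(adjust_plan("stay-in-lane", "stay-in-lane")) # agreement case
```

The string labels and the specific fallback are placeholders; the point is only that the prediction of the human's prediction, not the human's trajectory, drives the adjustment.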
Sagar Parekh
Grad Student, Virginia Tech
robotics · human-robot interaction · reinforcement learning
Lauren Bramblett
Autonomous Mobile Robots Lab (AMR Lab), Dept. of Systems & Information Engineering, Dept. of Electrical & Computer Engineering, University of Virginia, Charlottesville, VA 22903
N. Bezzo
Autonomous Mobile Robots Lab (AMR Lab), Dept. of Systems & Information Engineering, Dept. of Electrical & Computer Engineering, University of Virginia, Charlottesville, VA 22903
Dylan P. Losey
Collaborative Robotics Lab (Collab), Dept. of Mechanical Engineering, Virginia Tech, Blacksburg, VA 24061