🤖 AI Summary
This study investigates how robot competency and motion legibility influence human feedback behavior in Learning from Corrections (LfC), testing foundational assumptions about human-robot collaborative learning.
Method: In a controlled user study (N = 60), participants supervised and corrected a robot performing pick-and-place tasks; their corrective feedback was quantitatively analyzed along three dimensions: sensitivity (detection of suboptimal behavior), propensity (likelihood of issuing corrections), and precision (accuracy of corrections).
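The summary does not spell out how these three dimensions are operationalized, so as a rough illustration, here is a minimal Python sketch of one plausible scoring scheme over per-trial supervision logs. The log format, the `deviation` measure, and the specific formulas are all hypothetical assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical per-trial supervision log (not the paper's data or its exact
# operationalizations): did the robot actually err, did the participant
# correct it, and how far (in cm) the correction landed from the optimal fix.
robot_erred = np.array([1, 1, 0, 1, 0, 0, 1, 0])  # ground-truth error flags
corrected   = np.array([1, 0, 0, 1, 1, 0, 1, 0])  # corrections issued
deviation   = np.array([1.2, np.nan, np.nan, 0.4,
                        3.1, np.nan, 0.9, np.nan])  # cm; NaN when no correction given

# Sensitivity: fraction of true robot errors that drew a correction.
sensitivity = corrected[robot_erred == 1].mean()

# Propensity: overall rate of issuing corrections, needed or not.
propensity = corrected.mean()

# Precision: mean deviation of the corrections actually given (lower is better).
mean_deviation = np.nanmean(deviation[corrected == 1])

print(f"sensitivity={sensitivity:.2f}, propensity={propensity:.2f}, "
      f"mean deviation={mean_deviation:.2f} cm")
```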
Contribution/Results: Contrary to conventional LfC assumptions, a highly competent robot elicited unnecessary corrections (p = 0.0171), whereas an incompetent robot led participants to withhold necessary ones (p < 0.0001); participants were more sensitive to a competent robot's suboptimal behavior when its motions were legible (p = 0.0015) or predictable (p = 0.0055); and physical effort invested in a correction positively correlated with correction precision, though this coupling was significantly weaker for an incompetent robot with legible motions than with predictable motions (p = 0.0075). This work provides empirical evidence against two core LfC assumptions while supporting a third, revealing a trade-off among competency, legibility, effort, and precision and yielding cognitive insights and design principles for robust human-robot co-learning systems.
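For readers who want to run this kind of analysis on their own data, below is a minimal Python sketch (numpy/scipy) of testing an effort-precision correlation within each condition and then comparing the two correlations with Fisher's r-to-z transform, one standard test for a between-condition difference in correlation strength. All data here are synthetic, and whether this matches the paper's actual statistical procedure is an assumption:

```python
import numpy as np
from scipy.stats import pearsonr, norm

rng = np.random.default_rng(0)

def fisher_z_compare(r1, n1, r2, n2):
    """Two-sided test for whether two independent Pearson
    correlations differ, via Fisher's r-to-z transform."""
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
    return 2 * norm.sf(abs((z1 - z2) / se))

# Synthetic effort/precision samples for two conditions
# (incompetent robot with predictable vs. legible motions).
effort_pred = rng.uniform(0, 1, 30)
prec_pred   = 0.8 * effort_pred + rng.normal(0, 0.2, 30)  # strong coupling
effort_leg  = rng.uniform(0, 1, 30)
prec_leg    = 0.3 * effort_leg + rng.normal(0, 0.2, 30)   # weaker coupling

r1, p1 = pearsonr(effort_pred, prec_pred)
r2, p2 = pearsonr(effort_leg, prec_leg)
print(f"predictable: r={r1:.2f} (p={p1:.4f}); legible: r={r2:.2f} (p={p2:.4f})")
print(f"difference in correlations: p={fisher_z_compare(r1, 30, r2, 30):.4f}")
```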
📝 Abstract
As robot deployments become more commonplace, people are likely to take on the role of supervising robots (i.e., correcting their mistakes) rather than directly teaching them. Prior work on Learning from Corrections (LfC) has relied on three key assumptions to interpret human feedback: (1) people correct the robot only when there is significant task objective divergence; (2) people can accurately predict whether a correction is necessary; and (3) people trade off precision and physical effort when giving corrections. In this work, we study how two key factors (robot competency and motion legibility) affect how people provide correction feedback, and the implications for these existing assumptions. We conduct a user study ($N=60$) in an LfC setting where participants supervise and correct a robot performing pick-and-place tasks. We find that people are more sensitive to suboptimal behavior by a highly competent robot than by an incompetent robot when the motions are legible ($p=0.0015$) and predictable ($p=0.0055$). In addition, people tend to withhold necessary corrections ($p<0.0001$) when supervising an incompetent robot and are more prone to offering unnecessary ones ($p = 0.0171$) when supervising a highly competent robot. We also find that physical effort positively correlates with correction precision, providing empirical evidence to support this common assumption; however, this correlation is significantly weaker for an incompetent robot with legible motions than for one with predictable motions ($p = 0.0075$). Our findings offer insights into accounting for competency and legibility when designing robot interaction behaviors and learning task objectives from corrections.