🤖 AI Summary
This paper addresses the trade-off between goal conflicts and autonomy in human-robot collaboration under infinite-horizon temporal logic tasks. The authors propose a "maximal adaptation, minimal guidance" framework for logical interaction that integrates online policy adaptation, tunable feedback mechanisms, and temporal-logic planning. The framework enables the robot to adapt in real time to unknown human goal-directed behavior while requesting human assistance only when necessary, and in a minimally intrusive manner. To the authors' knowledge, this is the first approach to simultaneously guarantee persistent satisfaction of the robot's task specification and preservation of the human's behavioral freedom, thereby supporting emergent collaboration. Experiments on a Franka Emika Panda robotic arm and the Overcooked-AI platform demonstrate superior collaboration naturalness and task-completion robustness compared to state-of-the-art methods.
📝 Abstract
We present a novel framework for human-robot *logical* interaction that enables robots to reliably satisfy (infinite-horizon) temporal logic tasks while effectively collaborating with humans who pursue independent and unknown tasks. The framework combines two key capabilities: (i) *maximal adaptation* enables the robot to adjust its strategy *online* to exploit human behavior for cooperation whenever possible, and (ii) *minimal tunable feedback* enables the robot to request cooperation from the human online only when necessary to guarantee progress. This balance minimizes human-robot interference, preserves human autonomy, and ensures persistent robot task satisfaction even under conflicting human goals. We validate the approach in a real-world block-manipulation task with a Franka Emika Panda robotic arm and in the Overcooked-AI benchmark, demonstrating that our method produces rich, *emergent* cooperative behaviors beyond the reach of existing approaches, while maintaining strong formal guarantees.
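The adaptation/feedback balance described above can be illustrated with a minimal sketch. This is not the authors' algorithm: all names (`winning_actions`, `step`, the toy state/observation table) are hypothetical, and the real method operates on temporal-logic game structures rather than a lookup table. The sketch only shows the decision rule: act autonomously whenever some robot action keeps the task satisfiable under the observed human behavior, and issue a (minimal) request to the human only when no such action exists.

```python
# Illustrative sketch only; all names and the toy table are hypothetical.

def winning_actions(state, human_obs):
    """Hypothetical oracle: robot actions from `state` that keep the
    infinite-horizon task satisfiable given the observed human behavior."""
    table = {
        ("s0", "left"): ["pick"],   # human cleared the workspace: robot can act
        ("s0", "right"): [],        # human blocks the goal: no safe robot action
    }
    return table.get((state, human_obs), [])

def step(state, human_obs):
    """One round of 'maximal adaptation, minimal feedback'."""
    actions = winning_actions(state, human_obs)
    if actions:
        # Maximal adaptation: exploit current human behavior, no request issued.
        return ("act", actions[0])
    # Minimal feedback: only now ask the human to cooperate.
    return ("request", "please free the shared region")

print(step("s0", "left"))   # -> ('act', 'pick')
print(step("s0", "right"))  # -> ('request', 'please free the shared region')
```

In the paper's setting the "winning actions" would come from a strategy synthesized against the robot's temporal-logic specification, and the feedback request would be tunable in how strongly it constrains the human.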