🤖 AI Summary
This paper investigates the “specifiability boundary” in sensor-driven robotic planning—i.e., which tasks admit formal specification and implementation. Addressing how foundational semantic choices (e.g., states, actions, observations, knowledge) affect specifiability, the work elevates grounding—the mapping of logical constructs to physical or epistemic primitives—to a first-class design dimension. The authors introduce a unified symbolic framework integrating modal logic modeling, formal specification semantics, and realizability theory to systematically characterize existence conditions for task specifiability under varying grounding combinations. Their analysis reveals that several canonical temporal behavioral tasks are specifiable only under specific grounding configurations, establishing precise theoretical limits on the expressive and realizable power of specifications in sensor-based planning. These results provide foundational guidance for the principled design of specification-driven robotic systems.
📝 Abstract
There is now a large body of techniques, many based on formal methods, for describing and realizing complex robotics tasks, including those involving a variety of rich goals and time-extended behavior. This paper explores the limits of what sorts of tasks are specifiable, examining how the precise grounding of specifications—that is, whether the specification is given in terms of the robot's states, its actions and observations, its knowledge, or some other information—is crucial to whether a given task can be specified. While prior work included some description of particular choices for this grounding, our contribution treats this aspect as a first-class citizen: we introduce notation to deal with a large class of problems, and examine how the grounding affects what tasks can be posed. The results demonstrate that certain classes of tasks are specifiable only under particular combinations of groundings.