🤖 AI Summary
In environments where humans and robots cohabit, robots that lack commonsense reasoning pose semantic-level safety hazards, e.g., placing a cup of water above a laptop.
Method: We propose a Semantic Safety Filtering framework that, for the first time, integrates large language models’ (LLMs) contextual commonsense reasoning into a safety-certified closed loop, mapping human-specified semantic constraints (e.g., “liquid containers must not be suspended above fragile devices”) to verifiable control barrier functions (CBFs). Our approach jointly leverages 3D semantic scene reconstruction, LLM-driven constraint generation, CBF-based safety certification, and diffusion-policy fine-tuning.
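The CBF-based certification step can be sketched as a small quadratic program that minimally modifies the nominal input so a barrier condition holds. The snippet below is a simplified illustration assuming single-integrator dynamics and one affine constraint (so the QP has a closed-form projection); the function name and the specific dynamics are assumptions, not the paper's exact formulation.

```python
import numpy as np

def cbf_safety_filter(u_nom, grad_h, h, alpha=1.0):
    """Minimally modify the nominal input u_nom so the CBF condition
        grad_h . u + alpha * h >= 0
    holds (single-integrator dynamics x_dot = u assumed; illustrative sketch,
    not the paper's formulation). Solves, in closed form, the QP
        min ||u - u_nom||^2   s.t.   grad_h . u + alpha * h >= 0.
    """
    u_nom = np.asarray(u_nom, dtype=float)
    a = np.asarray(grad_h, dtype=float)
    slack = a @ u_nom + alpha * h
    if slack >= 0.0:
        return u_nom  # nominal input already satisfies the barrier condition
    # Project u_nom onto the halfspace {u : a.u + alpha*h >= 0}.
    return u_nom - (slack / (a @ a)) * a

# Example: nominal input pushes toward the unsafe region (grad_h points away),
# so the filter scales it back until the barrier condition is tight.
u_safe = cbf_safety_filter(u_nom=[1.0, 0.0], grad_h=[-1.0, 0.0], h=0.1)
```

In this toy case the filtered input satisfies the constraint with equality, i.e., the filter intervenes only as much as needed, which is the defining property of a safety filter as opposed to a hand-tuned override.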
Results: Evaluated in real kitchen settings on teleoperated and learning-based manipulation tasks, the framework substantially reduces semantic violations, increases safe action adoption by 37%, and achieves zero semantic safety incidents, surpassing conventional safety paradigms that rely solely on geometric collision checking.
📝 Abstract
Ensuring safe interactions in human-centric environments requires robots to understand and adhere to constraints recognized by humans as "common sense" (e.g., "moving a cup of water above a laptop is unsafe as the water may spill" or "rotating a cup of water is unsafe as it can lead to its contents pouring out"). Recent advances in computer vision and machine learning have enabled robots to acquire a semantic understanding of, and reason about, their operating environments. While extensive literature on safe robot decision-making exists, semantic understanding is rarely integrated into these formulations. In this work, we propose a semantic safety filter framework to certify robot inputs with respect to semantically defined constraints (e.g., unsafe spatial relationships, behaviors, and poses) and geometrically defined constraints (e.g., environment-collision and self-collision constraints). In our proposed approach, given perception inputs, we build a semantic map of the 3D environment and leverage the contextual reasoning capabilities of large language models to infer semantically unsafe conditions. These semantically unsafe conditions are then mapped to safe actions through a control barrier certification formulation. We demonstrate the proposed semantic safety filter in teleoperated manipulation tasks and with learned diffusion policies in a real-world kitchen environment, which further showcases its effectiveness in addressing practical semantic safety constraints. Together, these experiments highlight our approach's capability to integrate semantics into safety certification, enabling safe robot operation beyond traditional collision avoidance.
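The step from LLM-inferred unsafe conditions to certifiable constraints can be sketched as follows: the LLM returns structured semantic relations for the scene, and each unsafe relation is compiled into a barrier function h(x) whose zero superlevel set is the safe set. The response schema, the relation vocabulary, and the specific distance-based barrier below are all illustrative assumptions, not the paper's exact construction.

```python
import json
import numpy as np

# Hypothetical LLM response: semantically unsafe spatial relations inferred
# from the semantic scene map (schema and names are illustrative assumptions).
llm_response = json.dumps([
    {"relation": "above", "object": "water_cup",
     "reference": "laptop", "unsafe": True},
])

def relation_to_barrier(spec, margin=0.05):
    """Compile an 'A must not be above B' relation into a barrier function
    h >= 0 that is positive when the held object is horizontally clear of the
    reference's position by at least `margin` (a simplified stand-in for a
    footprint-based construction)."""
    def h(obj_xy, ref_xy):
        # Horizontal clearance minus the safety margin; negative means the
        # object hovers within the unsafe region over the reference.
        return float(np.linalg.norm(np.asarray(obj_xy) - np.asarray(ref_xy))
                     - margin)
    return h

barriers = [relation_to_barrier(s)
            for s in json.loads(llm_response) if s["unsafe"]]
```

Each compiled barrier can then be handed to the certification layer (e.g., as a CBF constraint on the robot's inputs), so the semantic rule is enforced by the same machinery as the geometric collision constraints.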