Robots that Suggest Safe Alternatives

📅 2024-09-15
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of enabling robots to both achieve user-specified goals and guarantee safe execution when confronted with out-of-distribution requests. The authors propose SALT (Safe ALTernatives), a framework for suggesting safe alternative goals. Its core idea is to move safety filtering from the action space to the goal space, using a goal-parameterized reach-avoid value network that jointly performs online safety monitoring and semantic-similarity-guided search over alternative goals. By combining offline reachability analysis with online inference, SALT proactively recommends semantically similar alternatives that satisfy the safety specification whenever the original goal is infeasible. Evaluated in indoor navigation and tabletop manipulation simulations, SALT is a less conservative monitor than open-loop uncertainty estimation, achieves high acceptance rates for its recommended alternatives, and aligns well with human preferences.
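The goal-space filtering idea can be illustrated with a minimal sketch. Everything here is an assumption for illustration, not the paper's implementation: `value_fn` stands in for the learned reach-avoid value network, `embed` for a semantic goal embedding, and `candidate_goals` for the set of alternatives searched online. If the requested goal's value is below threshold, the filter ranks safe candidates by embedding similarity and suggests the closest one.

```python
import numpy as np

def suggest_safe_alternative(state, goal, value_fn, candidate_goals, embed,
                             threshold=0.0):
    """Goal-space safety filtering sketch (hypothetical names, not SALT's code).

    value_fn(state, goal) approximates a reach-avoid value: a value above
    `threshold` means the pre-trained policy is predicted to reach `goal`
    safely from `state`.
    """
    if value_fn(state, goal) > threshold:
        return goal  # original goal is predicted safe: execute as requested

    # Otherwise, keep only candidate goals that satisfy the safety specification
    safe = [g for g in candidate_goals if value_fn(state, g) > threshold]
    if not safe:
        return None  # no safe alternative found

    # Rank safe candidates by cosine similarity to the requested goal's embedding
    g_emb = embed(goal)

    def similarity(g):
        e = embed(g)
        return float(np.dot(g_emb, e) /
                     (np.linalg.norm(g_emb) * np.linalg.norm(e)))

    return max(safe, key=similarity)
```

In this sketch the filter acts on goals rather than actions: instead of minimally perturbing a candidate action, it swaps the infeasible goal for the most semantically similar goal that the value network certifies as safe.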

📝 Abstract
Goal-conditioned policies, such as those learned via imitation learning, provide an easy way for humans to influence what tasks robots accomplish. However, these robot policies are not guaranteed to execute safely or to succeed when faced with out-of-distribution requests. In this work, we enable robots to know when they can confidently execute a user's desired goal, and automatically suggest safe alternatives when they cannot. Our approach is inspired by control-theoretic safety filtering, wherein a safety filter minimally adjusts a robot's candidate action to be safe. Our key idea is to pose alternative suggestion as a safe control problem in goal space, rather than in action space. Offline, we use reachability analysis to compute a goal-parameterized reach-avoid value network which quantifies the safety and liveness of the robot's pre-trained policy. Online, our robot uses the reach-avoid value network as a safety filter, monitoring the human's given goal and actively suggesting alternatives that are similar but meet the safety specification. We demonstrate our Safe ALTernatives (SALT) framework in simulation experiments with indoor navigation and Franka Panda tabletop manipulation, and with both discrete and continuous goal representations. We find that SALT is able to learn to predict successful and failed closed-loop executions, is a less pessimistic monitor than open-loop uncertainty quantification, and proposes alternatives that consistently align with those people find acceptable.
Problem

Research questions and friction points this paper is trying to address.

Goal-conditioned policies are not guaranteed to execute safely or succeed on out-of-distribution goal requests.
Robots lack a way to know when they can confidently execute a user's desired goal.
When a requested goal is infeasible, the robot should suggest a safe, semantically similar alternative rather than simply fail.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Goal-parameterized reach-avoid value network
Safety filter in goal space
Offline reachability analysis for safety
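The offline reachability analysis above can be sketched with a tabular reach-avoid Bellman backup under a fixed policy. This is a toy illustration under assumed conventions, not the paper's neural value network: `r` is a target margin (positive inside the target set), `g` is a safety margin (negative inside the failure set), and `step` is the policy's deterministic transition; `V(s) > 0` then means the policy reaches the target from `s` without entering the failure set.

```python
def reach_avoid_value(n_states, step, r, g, iters=50):
    """Tabular reach-avoid value iteration for a fixed policy (illustrative sketch).

    Backup: V(s) = min(g(s), max(r(s), V(step(s)))).
    The min with g enforces avoidance along the whole trajectory; the max
    with r credits states from which the target is eventually reached.
    """
    V = [r(s) for s in range(n_states)]  # initialize with the target margin
    for _ in range(iters):
        V = [min(g(s), max(r(s), V[step(s)])) for s in range(n_states)]
    return V
```

For example, on a 1D grid where the policy always moves right, states past an obstacle get positive value while states that must cross it get negative value; SALT's contribution is learning such a value function parameterized by the goal, so the same network can monitor any requested goal online.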
Hyun Joe Jeong
PhD Student, Carnegie Mellon University
Robotics · AI Safety · World Models
Andrea V. Bajcsy
Robotics Institute, Carnegie Mellon University