🤖 AI Summary
This work addresses the challenge of learning policies that simultaneously achieve high task performance and satisfy safety requirements in offline imitation learning, where demonstrations may contain unsafe behavior and per-timestep safety cost annotations are unavailable. The problem is formulated as a constrained Markov decision process (CMDP), and the paper introduces a novel framework that implicitly infers safety constraints from non-preferred trajectories, eliminating the need for explicit safety labels. By jointly optimizing a lower bound on the reward objective and learning a safety cost model, the proposed method learns policies that adhere to safety constraints while achieving strong task performance across multiple benchmarks, outperforming existing baselines.
📝 Abstract
This work addresses the problem of offline safe imitation learning (IL), where the goal is to learn safe, reward-maximizing policies from demonstrations that carry no per-timestep safety cost or reward information. In many real-world domains, online learning in the environment can be risky, and specifying accurate safety costs is difficult. However, it is often feasible to collect trajectories that reflect undesirable or unsafe behavior, implicitly conveying what the agent should avoid. We refer to these as non-preferred trajectories. We propose a novel offline safe IL algorithm, OSIL, that infers safety from non-preferred demonstrations. We formulate safe policy learning as a Constrained Markov Decision Process (CMDP). Instead of relying on explicit safety cost and reward annotations, OSIL reformulates the CMDP problem by deriving a lower bound on the reward-maximizing objective and learning a cost model that estimates the likelihood of non-preferred behavior. Our approach allows agents to learn safe, reward-maximizing behavior entirely from offline demonstrations. We empirically demonstrate that our approach learns safer policies that satisfy cost constraints without degrading reward performance, outperforming several baselines.
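To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's actual implementation; the names `cost_model`, `LAMBDA`, and the toy 2-D features are illustrative assumptions): a cost model is trained as a classifier estimating the probability that a (state, action) pair comes from a non-preferred trajectory, and that learned cost can then penalize the policy objective in Lagrangian CMDP style.

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy 2-D features standing in for (state, action) pairs: one cluster drawn
# from "preferred" behavior, one from "non-preferred" behavior.
preferred = [(random.gauss(-1, 0.5), random.gauss(-1, 0.5)) for _ in range(200)]
non_preferred = [(random.gauss(1, 0.5), random.gauss(1, 0.5)) for _ in range(200)]
data = [(x, 0.0) for x in preferred] + [(x, 1.0) for x in non_preferred]

# Logistic cost model c(s, a) ~= P(non-preferred | s, a),
# trained with SGD on binary cross-entropy.
w = [0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(300):
    random.shuffle(data)
    for (f1, f2), y in data:
        p = sigmoid(w[0] * f1 + w[1] * f2 + b)
        g = p - y  # gradient of cross-entropy w.r.t. the logit
        w[0] -= lr * g * f1
        w[1] -= lr * g * f2
        b -= lr * g

def cost_model(feat):
    """Estimated likelihood that this (state, action) is non-preferred."""
    return sigmoid(w[0] * feat[0] + w[1] * feat[1] + b)

# The learned cost then plugs into a CMDP-style penalized objective:
#   maximize  E[reward lower bound] - LAMBDA * E[cost_model(s, a)]
LAMBDA = 1.0  # Lagrange multiplier (fixed here; adapted during training in practice)

safe_cost = cost_model((-1.0, -1.0))    # resembles preferred behavior
unsafe_cost = cost_model((1.0, 1.0))    # resembles non-preferred behavior
print(safe_cost < 0.5, unsafe_cost > 0.5)
```

The classifier's probability acts as a dense per-timestep cost signal even though the dataset carries no explicit cost labels, which is the role the paper's cost model plays inside the CMDP formulation.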