OSIL: Learning Offline Safe Imitation Policies with Safety Inferred from Non-preferred Trajectories

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of learning policies that simultaneously achieve high performance and satisfy safety requirements in offline imitation learning, where trajectories may contain unsafe behaviors and per-timestep safety cost annotations are unavailable. The problem is formulated as a constrained Markov decision process (CMDP), and the paper introduces a framework that implicitly infers safety constraints from non-preferred trajectories, eliminating the need for explicit safety labels. By jointly optimizing a lower bound on the reward objective and learning a safety cost model, the proposed method learns policies that adhere to safety constraints while achieving strong task performance across multiple benchmarks, significantly outperforming existing baselines.

📝 Abstract
This work addresses the problem of offline safe imitation learning (IL), where the goal is to learn safe and reward-maximizing policies from demonstrations that lack per-timestep safety cost or reward information. In many real-world domains, online learning in the environment can be risky, and specifying accurate safety costs can be difficult. However, it is often feasible to collect trajectories that reflect undesirable or unsafe behavior, implicitly conveying what the agent should avoid. We refer to these as non-preferred trajectories. We propose a novel offline safe IL algorithm, OSIL, that infers safety from non-preferred demonstrations. We formulate safe policy learning as a Constrained Markov Decision Process (CMDP). Instead of relying on explicit safety cost and reward annotations, OSIL reformulates the CMDP problem by deriving a lower bound on the reward-maximizing objective and learning a cost model that estimates the likelihood of non-preferred behavior. Our approach allows agents to learn safe and reward-maximizing behavior entirely from offline demonstrations. We empirically demonstrate that our approach learns safer policies that satisfy cost constraints without degrading reward performance, outperforming several baselines.
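The abstract's core recipe can be sketched in a few lines: fit a cost model that scores the likelihood a transition comes from non-preferred behavior, then fold its expected value into a constrained (Lagrangian-style) objective. The sketch below is illustrative only and assumes invented data; the logistic-regression cost model, the fixed multiplier `lam`, and the placeholder `reward_term` stand in for OSIL's actual cost model and reward lower bound, which are paper-specific.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for offline data (dimensions and distributions are invented):
# `demo` mimics demonstration transitions, `non_preferred` mimics unsafe ones.
demo = rng.normal(0.0, 1.0, size=(200, 4))
non_preferred = rng.normal(2.0, 1.0, size=(200, 4))

# Train a logistic-regression cost model c(s, a) ~ P(non-preferred | s, a).
X = np.vstack([demo, non_preferred])
y = np.concatenate([np.zeros(200), np.ones(200)])  # label 1 = non-preferred
w, b, lr = np.zeros(4), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probability
    w -= lr * (X.T @ (p - y)) / len(y)      # gradient of log-loss w.r.t. weights
    b -= lr * np.mean(p - y)

def cost(features):
    """Estimated likelihood that a transition reflects non-preferred behavior."""
    return 1.0 / (1.0 + np.exp(-(features @ w + b)))

# Lagrangian-style CMDP surrogate: maximize a reward term subject to
# E[cost] <= budget. `reward_term` is a placeholder for the paper's
# reward lower bound; `lam` is a fixed multiplier for illustration.
budget, lam = 0.1, 1.0
reward_term = 0.0
expected_cost = cost(demo).mean()
lagrangian = reward_term - lam * max(0.0, expected_cost - budget)
print(f"E[cost] on demonstrations: {expected_cost:.3f}")
```

In practice the multiplier would be adapted (e.g. by dual gradient ascent) rather than held fixed, but the structure — learned cost as a constraint on an otherwise reward-driven objective — mirrors the method the abstract outlines.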
Problem

Research questions and friction points this paper is trying to address.

offline imitation learning
safe reinforcement learning
non-preferred trajectories
constrained policy learning
safety inference
Innovation

Methods, ideas, or system contributions that make the work stand out.

offline imitation learning
safe reinforcement learning
non-preferred trajectories
constrained MDP
cost modeling
Returaj Burnwal
Indian Institute of Technology Madras
Nirav Pravinbhai Bhatt
Indian Institute of Technology Madras
Balaraman Ravindran
Professor of Data Science and AI, Wadhwani School of Data Science and AI, IIT Madras
Reinforcement Learning · Data Mining · Network Analysis · Responsible AI