SafeMIL: Learning Offline Safe Imitation Policy from Non-Preferred Trajectories

📅 2025-11-11
📈 Citations: 0
Influential citations: 0
🤖 AI Summary
This paper addresses offline safe imitation learning: learning risk-averse policies solely from non-preferred trajectories, that is, demonstrations of undesirable behavior that carry no risk labels, explicit rewards, or cost signals. To this end, we propose SafeMIL, the first method to introduce Multiple Instance Learning (MIL) into this setting. SafeMIL models the latent trajectory-level risk structure to construct a differentiable state-action risk cost function, and jointly optimizes it within an end-to-end framework that unifies offline reinforcement learning and imitation learning. Unlike baselines that rely on preference annotations, online interaction, or explicit safety constraints, SafeMIL achieves substantial safety improvements, reducing risk events by 32%–68% across multiple simulated environments, while preserving near-optimal reward performance. These results demonstrate the effectiveness and generalizability of implicitly extracting safety knowledge from weakly supervised, non-preferred data.
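The summary describes the core mechanism: each trajectory is treated as a MIL bag of state-action instances, and a parameterized cost scores instance-level risk. Below is a minimal PyTorch sketch of that idea, assuming a small MLP cost network, max-pooling bag aggregation, and demonstration trajectories serving as negative (safe) bags; these specifics are illustrative assumptions, not details confirmed by the paper text shown here.

```python
# Minimal MIL-style trajectory risk sketch (assumed architecture, not the
# paper's exact design).
import torch
import torch.nn as nn

class InstanceRiskCost(nn.Module):
    """Parameterized cost c_theta(s, a): per state-action risk score in [0, 1]."""
    def __init__(self, state_dim: int, action_dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, states, actions):
        return self.net(torch.cat([states, actions], dim=-1)).squeeze(-1)

def bag_risk(cost: InstanceRiskCost, states, actions):
    """Aggregate instance risks into one trajectory (bag) score.

    Max-pooling encodes the standard MIL assumption: a trajectory is risky
    if at least one of its state-action pairs is risky.
    """
    return cost(states, actions).max()

def mil_loss(cost, safe_bags, non_preferred_bags):
    """Bag-level BCE: non-preferred trajectories are positive (risky) bags."""
    bce = nn.BCELoss()
    loss = torch.tensor(0.0)
    for s, a in non_preferred_bags:   # label 1: contains risky behavior
        loss = loss + bce(bag_risk(cost, s, a), torch.tensor(1.0))
    for s, a in safe_bags:            # label 0: assumed safe throughout
        loss = loss + bce(bag_risk(cost, s, a), torch.tensor(0.0))
    return loss / (len(safe_bags) + len(non_preferred_bags))
```

With max-pooling, the gradient flows only to the highest-scoring instance in each bag, which is how bag-level labels localize risk to individual state-action pairs.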

📝 Abstract
In this work, we study the problem of offline safe imitation learning (IL). In many real-world settings, online interaction can be risky, and accurately specifying the reward and the safety cost at each timestep can be difficult. However, it is often feasible to collect trajectories reflecting undesirable or risky behavior, implicitly conveying the behavior the agent should avoid. We refer to these trajectories as non-preferred trajectories. Unlike standard IL, which aims to mimic demonstrations, our agent must also learn to avoid risky behavior using non-preferred trajectories. In this paper, we propose a novel approach, SafeMIL, that uses Multiple Instance Learning to learn a parameterized cost predicting whether a state-action pair is risky. The learned cost is then used to avoid non-preferred behaviors, resulting in a policy that prioritizes safety. We empirically demonstrate that our approach can learn a safer policy that satisfies cost constraints without degrading reward performance, thereby outperforming several baselines.
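The abstract states that the learned cost is used to steer the policy away from non-preferred behaviors. One common way to realize this, shown purely as an assumption here, is a Lagrangian penalty on top of a behavior-cloning loss using the `InstanceRiskCost` sketched above; the MSE imitation term, the cost budget `d`, and the function name are hypothetical, not the paper's stated objective.

```python
# Hedged sketch: imitate demonstrations while keeping the expected learned
# cost of the policy's own actions under a budget d. Only the policy's
# parameters would be updated with this loss; the cost network stays fixed.
def safe_policy_loss(policy, cost, states, expert_actions,
                     lagrange_lambda: float, d: float = 0.1):
    pred_actions = policy(states)                      # deterministic policy
    bc_loss = ((pred_actions - expert_actions) ** 2).mean()
    expected_cost = cost(states, pred_actions).mean()  # predicted risk
    # The multiplier trades imitation accuracy against predicted risk.
    return bc_loss + lagrange_lambda * (expected_cost - d), expected_cost
```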
Problem

Research questions and friction points this paper is trying to address.

Learning safe imitation policies from offline non-preferred trajectories
Avoiding risky behaviors using implicitly conveyed safety information
Satisfying cost constraints without degrading reward performance
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learning safe policy from non-preferred trajectories
Using Multiple Instance Learning for risk prediction
Prioritizing safety while maintaining reward performance (a dual-update sketch follows this list)
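One standard way to balance safety against reward, assumed here for illustration rather than taken from the paper, is dual ascent on the multiplier from the sketch above: the penalty grows while the constraint is violated and decays toward zero once it holds, so safety does not permanently suppress reward performance.

```python
# Hypothetical dual-ascent update for the safety multiplier.
def update_lambda(lagrange_lambda: float, expected_cost: float,
                  d: float = 0.1, lr: float = 1e-3) -> float:
    # Increase lambda while expected cost exceeds the budget d;
    # clamp at zero so the dual variable stays non-negative.
    return max(lagrange_lambda + lr * (expected_cost - d), 0.0)
```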
Returaj Burnwal
Department of Computer Science and Engineering, Indian Institute of Technology Madras, India
N. Bhatt
Department of Data Science and AI, Wadhwani School of Data Science and AI, Indian Institute of Technology Madras, India
Balaraman Ravindran
Professor of Data Science and AI, Wadhwani School of Data Science and AI, IIT Madras
Reinforcement Learning · Data Mining · Network Analysis · Responsible AI