🤖 AI Summary
This paper addresses offline safe imitation learning: learning risk-averse policies solely from non-preferential trajectories—demonstrations lacking risk labels, explicit rewards, or cost signals. To this end, we propose SafeMIL, the first method to introduce multi-instance learning (MIL) into this setting. SafeMIL models the latent trajectory-level risk structure to construct a differentiable state-action risk cost function, and jointly optimizes it within an end-to-end framework unifying offline reinforcement learning and imitation learning. Unlike baseline methods relying on preference annotations, online interaction, or explicit safety constraints, SafeMIL achieves substantial safety improvements—reducing risk events by 32%–68% across multiple simulated environments—while preserving near-optimal reward performance. These results demonstrate the effectiveness and generalizability of implicitly extracting safety knowledge from weakly supervised, non-preferential data.
📝 Abstract
In this work, we study the problem of offline safe imitation learning (IL). In many real-world settings, online interactions can be risky, and accurately specifying the reward and the safety cost information at each timestep can be difficult. However, it is often feasible to collect trajectories reflecting undesirable or risky behavior, implicitly conveying the behavior the agent should avoid. We refer to these trajectories as non-preferred trajectories. Unlike standard IL, which aims to mimic demonstrations, our agent must also learn to avoid risky behavior using non-preferred trajectories. In this paper, we propose a novel approach, SafeMIL, to learn a parameterized cost that predicts if the state-action pair is risky via Multiple Instance Learning. The learned cost is then used to avoid non-preferred behaviors, resulting in a policy that prioritizes safety. We empirically demonstrate that our approach can learn a safer policy that satisfies cost constraints without degrading the reward performance, thereby outperforming several baselines.