🤖 AI Summary
Traditional safety shields in reinforcement learning are static, relying on fixed environment assumptions and hand-crafted abstractions; they fail catastrophically when those assumptions are violated.
Method: We propose the first adaptive shielding framework for GR(1) specifications, integrating runtime assumption monitoring with inductive logic programming (ILP) to repair violated specifications online in an interpretable way. Our approach selectively weakens objectives, only as needed, to preserve both safety and liveness, letting the shield evolve gracefully. It unifies linear temporal logic (LTL), GR(1) synthesis, and runtime assumption checking to synthesize evolvable symbolic controllers.
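To make the loop concrete, here is a minimal sketch (not the paper's implementation) of how adaptive shielding could be wired together. `GR1Spec`, `synthesize_shield`, `assumption_violated`, and `ilp_repair` are hypothetical stand-ins for the GR(1) synthesizer, the runtime assumption monitor, and the ILP-based repairer, assuming a Gym-style `env`/`agent` interface:

```python
from dataclasses import dataclass, field

@dataclass
class GR1Spec:
    """A GR(1) specification split into environment assumptions and system guarantees."""
    assumptions: list = field(default_factory=list)  # callables over observed traces
    guarantees: list = field(default_factory=list)   # system safety/liveness goals

def synthesize_shield(spec: GR1Spec):
    """Stand-in for GR(1) synthesis: returns a shield that filters proposed actions."""
    def shield(state, proposed_action):
        # A real shield would consult the synthesized symbolic controller and
        # override unsafe actions; this placeholder simply passes them through.
        return proposed_action
    return shield

def assumption_violated(spec: GR1Spec, trace) -> bool:
    """Runtime monitor: has the observed trace falsified any environment assumption?"""
    return any(not holds(trace) for holds in spec.assumptions)

def ilp_repair(spec: GR1Spec, trace) -> GR1Spec:
    """Stand-in for ILP-based repair: drop (weaken) falsified assumptions; a real
    repairer would also weaken goals, but only when needed for realizability."""
    spec.assumptions = [holds for holds in spec.assumptions if holds(trace)]
    return spec

def run_episode(env, agent, spec: GR1Spec) -> GR1Spec:
    """Shielded RL episode: monitor assumptions and repair/re-synthesize on violation."""
    shield, trace = synthesize_shield(spec), []
    state, done = env.reset(), False
    while not done:
        action = shield(state, agent.act(state))
        state, reward, done = env.step(action)
        trace.append(state)
        if assumption_violated(spec, trace):
            spec = ilp_repair(spec, trace)    # repair the specification online
            shield = synthesize_shield(spec)  # re-synthesize the shield against it
    return spec
```

The key design point this sketch illustrates is that repair happens inside the episode loop: the shield is never discarded, only re-synthesized from the repaired specification, so formal guarantees are maintained throughout.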
Results: Evaluated on Minepump and Atari Seaquest, our adaptive shielded agent achieves 100% logical compliance while attaining near-optimal reward, substantially outperforming static shielding baselines. The framework ensures robustness under assumption violations without compromising formal guarantees.
📝 Abstract
Shielding is widely used to enforce safety in reinforcement learning (RL), ensuring that an agent's actions remain compliant with formal specifications. Classical shielding approaches, however, are often static, in the sense that they assume fixed logical specifications and hand-crafted abstractions. While these static shields provide safety under nominal assumptions, they fail to adapt when environment assumptions are violated. In this paper, we develop what is, to the best of our knowledge, the first adaptive shielding framework based on Generalized Reactivity of rank 1 (GR(1)) specifications, a tractable and expressive fragment of Linear Temporal Logic (LTL) that captures both safety and liveness properties. Our method detects environment assumption violations at runtime and employs Inductive Logic Programming (ILP) to automatically repair GR(1) specifications online, in a systematic and interpretable way. The shield thus evolves gracefully, keeping liveness achievable and weakening goals only when necessary. We consider two case studies, Minepump and Atari Seaquest, showing that (i) static symbolic controllers are often severely suboptimal when optimizing for auxiliary rewards, and (ii) RL agents equipped with our adaptive shield maintain near-optimal reward and perfect logical compliance, in contrast to static shields.
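For reference, a GR(1) specification takes the standard assume-guarantee form below (notation varies across papers): the antecedent collects the environment's initial condition, safety invariants, and recurrence goals, and the consequent collects the system's counterparts. Runtime assumption violations falsify the antecedent, which is precisely when repair is needed to keep the implication meaningful:

```latex
% Standard GR(1) assume-guarantee form: environment assumptions imply
% system guarantees. \theta = initial conditions, \psi = safety invariants,
% J = liveness (recurrence) goals.
\[
  \Bigl(\theta_e \,\wedge\, \Box\,\psi_e \,\wedge\, \bigwedge_{i=1}^{m} \Box\Diamond J^{e}_{i}\Bigr)
  \;\rightarrow\;
  \Bigl(\theta_s \,\wedge\, \Box\,\psi_s \,\wedge\, \bigwedge_{j=1}^{n} \Box\Diamond J^{s}_{j}\Bigr)
\]
```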