🤖 AI Summary
This work addresses the challenge of ensuring both safety and sample-efficient adaptation during test-time task learning in meta-reinforcement learning. It proposes a constrained meta-RL algorithm that simultaneously guarantees provable safety at test time and achieves near-optimal sample complexity. The approach learns a general-purpose prior during meta-training and refines the policy under explicit safety constraints during test-time adaptation. Theoretical analysis shows that the method converges to a near-optimal policy with a sample complexity that matches a derived lower bound, while satisfying the safety requirements throughout the learning process.
📝 Abstract
Meta-reinforcement learning (meta-RL) allows agents to leverage experience across a distribution of tasks on which they can train at will, enabling faster learning of optimal policies on new test tasks. Although meta-RL improves sample complexity on test tasks, many real-world applications, such as robotics and healthcare, also impose safety constraints during testing. Constrained meta-RL provides a promising framework for integrating safety into meta-RL. An open question in constrained meta-RL is how to ensure the safety of the policy on the real-world test task while reducing the sample complexity and thus enabling faster learning of optimal policies. To address this gap, we propose an algorithm that refines policies learned during training, with provable safety and sample-complexity guarantees for learning a near-optimal policy on the test tasks. We further derive a matching lower bound, showing that this sample complexity is tight.
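The abstract does not spell out the adaptation mechanism. One common way to refine a policy under an explicit safety constraint is a Lagrangian primal-dual update: ascend the reward while a dual variable penalizes constraint violation. The sketch below is a hypothetical one-parameter illustration of this generic technique, not the paper's algorithm; the reward, cost, and constraint threshold are invented for the example.

```python
# Toy illustration (not the paper's method): constrained policy refinement
# via a Lagrangian primal-dual update. The "policy" is one parameter theta.
# Reward r(theta) = -(theta - 2)^2 peaks at theta = 2, but the safety
# constraint c(theta) = theta <= 1.5 caps it, so the safe optimum is 1.5.

def adapt(theta0=0.0, limit=1.5, lr=0.05, lr_lam=0.1, steps=2000):
    theta, lam = theta0, 0.0
    for _ in range(steps):
        grad_r = -2.0 * (theta - 2.0)            # d(reward)/d(theta)
        grad_c = 1.0                             # d(cost)/d(theta)
        theta += lr * (grad_r - lam * grad_c)    # primal ascent on Lagrangian
        lam = max(0.0, lam + lr_lam * (theta - limit))  # dual ascent, lam >= 0
    return theta, lam

theta, lam = adapt()
print(round(theta, 3), round(lam, 3))
```

The dual variable `lam` settles at the value that makes the constrained stationarity condition hold (here the reward gradient at theta = 1.5 is 1, so lam converges near 1); the unconstrained optimum theta = 2 is never reached because it would violate the safety constraint.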