Efficient Policy Evaluation with Safety Constraint for Reinforcement Learning

📅 2024-10-08
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the dual challenges of high variance and unsafe online execution in on-policy evaluation for reinforcement learning, this paper proposes a behavior policy design method that minimizes the variance of importance sampling (IS) estimators under safety constraints. Unlike existing approaches, the method strictly enforces state-action-level safety constraints without introducing bias. Building on constrained optimization theory and IS fundamentals, the authors develop a safety-aware policy search framework that combines Lagrangian duality with gradient projection. Experiments show that the approach substantially reduces estimation variance relative to classical on-policy methods while satisfying the safety constraints. To the authors' knowledge, it is the only method to achieve both stringent safety assurance and substantial variance reduction simultaneously, surpassing prior techniques in both estimation accuracy and safety robustness.
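The core idea the summary refers to can be sketched on a toy one-step problem: an IS estimator stays unbiased under any behavior policy with full support, and a well-chosen behavior policy can sharply cut its variance. The problem setup, the rewards, and the variance-reducing policy below are illustrative assumptions, not the paper's constrained method (this toy ignores safety entirely):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a one-step problem (bandit) with 3 actions and known rewards.
rewards = np.array([1.0, 5.0, 10.0])
pi = np.array([0.2, 0.3, 0.5])        # target policy to evaluate
true_value = float(pi @ rewards)      # E_pi[r] = 6.7

def is_estimate(behavior, n=10_000):
    """Per-sample importance-sampling estimates of E_pi[r] from data
    collected under `behavior`; unbiased for any full-support behavior."""
    actions = rng.choice(3, size=n, p=behavior)
    weights = pi[actions] / behavior[actions]   # importance ratios
    return weights * rewards[actions]

# On-policy: behavior == target, so every importance ratio is 1.
on_policy = is_estimate(pi)

# A variance-reducing behavior policy: sample in proportion to
# pi(a) * |r(a)|, the classical minimum-variance choice for this
# one-step case (here it is even zero-variance, since all rewards
# are positive). This is NOT the paper's safety-constrained policy.
mu = pi * np.abs(rewards)
mu /= mu.sum()
off_policy = is_estimate(mu)

print(f"true value:            {true_value:.3f}")
print(f"on-policy   mean/var:  {on_policy.mean():.3f} / {on_policy.var():.3f}")
print(f"off-policy  mean/var:  {off_policy.mean():.3f} / {off_policy.var():.3f}")
```

Both estimators average to the true value; only their variances differ, which is exactly the degree of freedom the paper optimizes, subject to safety.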

📝 Abstract
In reinforcement learning, classic on-policy evaluation methods often suffer from high variance and require massive online data to attain the desired accuracy. Previous studies attempt to reduce evaluation variance by searching for or designing proper behavior policies to collect data. However, these approaches ignore the safety of such behavior policies -- the designed behavior policies have no safety guarantee and may lead to severe damage during online executions. In this paper, to address the challenge of reducing variance while ensuring safety simultaneously, we propose an optimal variance-minimizing behavior policy under safety constraints. Theoretically, while ensuring safety constraints, our evaluation method is unbiased and has lower variance than on-policy evaluation. Empirically, our method is the only existing method to achieve both substantial variance reduction and safety constraint satisfaction. Furthermore, we show our method is even superior to previous methods in both variance reduction and execution safety.
Problem

Research questions and friction points this paper is trying to address.

Reducing variance in reinforcement learning policy evaluation
Ensuring safety constraints during online policy execution
Designing optimal behavior policies for safe data collection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimal variance-minimizing behavior policy design
Safety constraint satisfaction during evaluation
Unbiased lower-variance alternative to on-policy evaluation
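How a safety constraint reshapes the variance-minimizing behavior policy can be illustrated on the same kind of one-step toy problem, where the constrained optimum follows from the KKT conditions by hand: cap the probability of a hypothetical risky action, then split the remaining mass in proportion to pi(a)*r(a). The numbers, the cap, and this closed-form split are illustrative assumptions; the paper's method handles general MDPs and state-action-level constraints:

```python
import numpy as np

pi = np.array([0.2, 0.3, 0.5])      # target policy (feasible under the cap)
r = np.array([1.0, 5.0, 10.0])
CAP = 0.6                           # hypothetical safety cap on action 2

# Variance of the one-step IS estimator under behavior policy mu:
#   Var(mu) = sum_a pi(a)^2 r(a)^2 / mu(a) - (E_pi[r])^2
w = pi * r
def var(mu):
    return float((w**2 / mu).sum() - (pi @ r) ** 2)

# KKT solution of: min Var(mu) s.t. sum(mu) = 1, mu[2] <= CAP.
# The unconstrained optimum puts w[2]/w.sum() ~ 0.75 on action 2,
# so the cap binds; the rest is split in proportion to w.
mu = np.empty(3)
mu[2] = min(w[2] / w.sum(), CAP)
mu[:2] = (1 - mu[2]) * w[:2] / w[:2].sum()

print("behavior policy:     ", np.round(mu, 4))
print(f"on-policy variance:    {var(pi):.2f}")
print(f"constrained optimum:   {var(mu):.2f}")
```

Because the target policy itself is feasible here, the constrained optimum still has lower variance than on-policy evaluation, which mirrors the paper's claim of unbiased, safe, lower-variance estimation; with a cap tight enough to exclude the target policy, the feasible minimum variance could exceed the on-policy variance.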