Safe Reinforcement Learning with Preference-based Constraint Inference

📅 2026-03-24
📈 Citations: 0
Influential: 0
📝 Abstract
Safe reinforcement learning (RL) is a standard paradigm for safety-critical decision making. However, real-world safety constraints can be complex, subjective, and hard to specify explicitly. Existing work on constraint inference relies on restrictive assumptions or extensive expert demonstrations, which is unrealistic in many real-world applications. How to cheaply and reliably learn these constraints is the central challenge of this study. While inferring constraints from human preferences offers a data-efficient alternative, we identify that the popular Bradley-Terry (BT) models fail to capture the asymmetric, heavy-tailed nature of safety costs, resulting in risk underestimation. The impact of BT models on downstream policy learning also remains poorly understood in the literature. To address these gaps, we propose a novel approach, Preference-based Constrained Reinforcement Learning (PbCRL). We introduce a dead zone mechanism into preference modeling and theoretically prove that it encourages heavy-tailed cost distributions, thereby achieving better constraint alignment. Additionally, we incorporate a Signal-to-Noise Ratio (SNR) loss that encourages exploration guided by cost variances, which we find benefits policy learning. Further, a two-stage training strategy is deployed to lower the online labeling burden while adaptively enhancing constraint satisfaction. Empirical results demonstrate that PbCRL achieves superior alignment with true safety requirements and outperforms state-of-the-art baselines in terms of both safety and reward. Our work explores a promising and effective way to infer constraints in Safe RL, with great potential in a range of safety-critical applications.
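The abstract does not give the exact functional form of the dead zone, so the following is only a minimal sketch of the general idea: a Bradley-Terry preference probability over segment safety costs, where cost gaps smaller than a threshold `delta` are zeroed (a soft-thresholding form, assumed here for illustration) so that near-tied comparisons carry no preference signal. All names (`dead_zone`, `bt_preference_prob`, `delta`) are hypothetical, not from the paper.

```python
import numpy as np

def dead_zone(x, delta):
    # Soft-threshold: zero out gaps inside [-delta, delta] and shrink
    # larger gaps toward zero by delta (assumed form, for illustration).
    return np.sign(x) * np.maximum(np.abs(x) - delta, 0.0)

def bt_preference_prob(cost_a, cost_b, delta=0.0):
    # Bradley-Terry probability that segment A is preferred (safer,
    # i.e. lower cumulative cost) over segment B, applied to the
    # dead-zoned cost gap. With delta=0 this is the vanilla BT model.
    gap = dead_zone(cost_b - cost_a, delta)
    return 1.0 / (1.0 + np.exp(-gap))
```

With `delta=0`, a cost gap of 2 yields the usual sigmoid preference (~0.88); with a dead zone wider than the gap, the model returns 0.5, treating the pair as uninformative.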
Problem

Research questions and friction points this paper is trying to address.

Safe Reinforcement Learning
Constraint Inference
Human Preferences
Heavy-tailed Costs
Safety Constraints
Innovation

Methods, ideas, or system contributions that make the work stand out.

Preference-based Constraint Inference
Dead Zone Mechanism
Heavy-tailed Cost Distribution
Signal-to-Noise Ratio Loss
Two-stage Training
Chenglin Li
Professor, Department of Electronic Engineering, Shanghai Jiao Tong University
Multimedia communications, adaptive video streaming, deep reinforcement learning, distributed and federated learning
Guangchun Ruan
Laboratory for Information & Decision Systems, Massachusetts Institute of Technology, Cambridge, MA, USA
Hua Geng
Department of Automation, Tsinghua University, Beijing, China