Certifiable Safe RLHF: Fixed-Penalty Constraint Optimization for Safer Language Models

📅 2025-10-03
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) face a fundamental trade-off between safety and utility, remaining highly vulnerable to adversarial jailbreaking attacks. To address this, we propose CS-RLHF, a novel constrained reinforcement learning from human feedback framework that replaces the unstable Lagrange multiplier–based optimization of conventional constrained Markov decision processes (CMDPs) with a semantics-driven cost model coupled with a fixed-penalty mechanism. This approach eliminates dual-variable updates and guarantees theoretically verifiable feasibility of the safety constraints at all optimization steps. We further introduce a semantic safety scorer, pretrained on large-scale corpora, enabling fine-grained, interpretable, and empirically verifiable safety control. Experiments demonstrate that CS-RLHF achieves ≥5× higher response efficiency than state-of-the-art methods while significantly improving safety and robustness under both standard and jailbreaking prompts.

📝 Abstract
Ensuring safety is a foundational requirement for large language models (LLMs). Achieving an appropriate balance between enhancing the utility of model outputs and mitigating their potential for harm is a complex and persistent challenge. Contemporary approaches frequently formalize this problem within the framework of Constrained Markov Decision Processes (CMDPs) and employ established CMDP optimization techniques. However, these methods exhibit two notable limitations. First, their reliance on reward and cost functions renders performance highly sensitive to the underlying scoring mechanism, which must capture semantic meaning rather than being triggered by superficial keywords. Second, CMDP-based training entails tuning dual variables, a process that is computationally expensive and provides no provable safety guarantee for a fixed dual variable, which can therefore be exploited through adversarial jailbreaks. To overcome these limitations, we introduce Certifiable Safe-RLHF (CS-RLHF), which employs a cost model trained on a large-scale corpus to assign semantically grounded safety scores. In contrast to the Lagrangian-based approach, CS-RLHF adopts a rectified penalty-based formulation. This design draws on the theory of exact penalty functions in constrained optimization, wherein constraint satisfaction is enforced directly through a suitably chosen penalty term. With an appropriately scaled penalty, feasibility of the safety constraints can be guaranteed at the optimizer, eliminating the need for dual-variable updates. Empirical evaluation demonstrates that CS-RLHF outperforms state-of-the-art LLMs, producing responses that are at least 5 times more efficient against both nominal and jailbreaking prompts.
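The rectified penalty-based formulation described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual implementation: the function name, the safety budget, and the penalty scale are all assumed for the example.

```python
def cs_rlhf_objective(reward: float, cost: float,
                      budget: float = 0.0, penalty: float = 10.0) -> float:
    """Rectified fixed-penalty objective (illustrative sketch).

    The reward is reduced by a fixed penalty on the constraint
    violation max(0, cost - budget). Per exact-penalty theory, the
    penalty coefficient is a constant chosen large enough up front,
    so no dual variable is maintained or updated during training.
    """
    violation = max(0.0, cost - budget)  # rectification: penalize only infeasible responses
    return reward - penalty * violation
```

Under this objective, a response whose cost stays within the safety budget is scored purely by its reward, while an unsafe response is penalized in proportion to how far it exceeds the budget.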
Problem

Research questions and friction points this paper is trying to address.

Balancing language model utility with safety mitigation challenges
Overcoming CMDP limitations in reward sensitivity and computational expense
Providing certifiable safety guarantees against adversarial jailbreak attacks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses a cost model for semantic safety scoring
Adopts rectified penalty-based constraint optimization
Eliminates dual-variable updates for efficiency
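For contrast with the last bullet, the dual-variable update that CS-RLHF eliminates can be sketched as a standard projected dual-ascent step for a Lagrangian CMDP solver. The variable names and step size here are illustrative assumptions, not taken from the paper.

```python
def dual_ascent_step(lam: float, avg_cost: float,
                     budget: float = 0.0, step_size: float = 0.1) -> float:
    """One projected dual-ascent update (illustrative sketch).

    In a Lagrangian CMDP solver, the multiplier grows while the
    average cost exceeds the budget and shrinks (but stays
    nonnegative) otherwise; this update must be interleaved with
    policy optimization. CS-RLHF removes this loop entirely by
    fixing the penalty coefficient in advance.
    """
    return max(0.0, lam + step_size * (avg_cost - budget))
```

Because the multiplier co-evolves with the policy, this loop is both a source of training instability and the reason a fixed multiplier offers no guarantee against adversarial prompts.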
Kartik Pandit
Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
Sourav Ganguly
Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
Arnesh Banerjee
Department of Computer Engineering, Heritage Institute of Technology, Kolkata, India
Shaahin Angizi
Assistant Professor at New Jersey Institute of Technology
In-Memory Computing · In-Sensor Computing · Memory Security · AI · Digital Design
Arnob Ghosh
Assistant Professor of ECE at New Jersey Institute of Technology
Reinforcement Learning · Game Theory · Intelligent Transportation Systems · Computer Networks