🤖 AI Summary
Large language models (LLMs) face a fundamental trade-off between safety and utility and remain highly vulnerable to adversarial jailbreaking attacks. To address this, we propose CS-RLHF, a constrained reinforcement learning from human feedback framework that replaces the unstable Lagrange multiplier–based optimization of conventional constrained Markov decision processes (CMDPs) with a semantics-driven cost model coupled with a fixed-penalty mechanism. This approach eliminates dual-variable updates and guarantees feasibility of the safety constraints at the optimizer. We further introduce a semantic safety scorer, pretrained on large-scale corpora, that enables fine-grained, interpretable, and empirically verifiable safety control. Experiments demonstrate that CS-RLHF yields responses at least 5× more efficient than state-of-the-art methods while improving both safety and robustness under standard and jailbreaking prompts.
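To give a rough feel for the fixed-penalty mechanism, here is a minimal PyTorch-style sketch. The names (`cs_rlhf_loss`, `RHO`, `COST_BUDGET`) and values are illustrative assumptions, not from the paper, and the actual method applies the penalty within a full policy-gradient objective rather than to raw batch scores:

```python
import torch

# Fixed penalty scale and safety budget (hypothetical values).
# Per exact-penalty theory, RHO is chosen large enough, once, that
# any minimizer is feasible; it is never updated like a Lagrange
# multiplier would be.
RHO = 10.0
COST_BUDGET = 0.0

def cs_rlhf_loss(reward: torch.Tensor, cost: torch.Tensor) -> torch.Tensor:
    """Penalized objective: maximize reward, penalize constraint
    violation through a rectified (hinge) term.

    reward: per-sample reward-model scores for sampled responses
    cost:   per-sample semantic safety costs from the cost model
    """
    # Rectified penalty: active only when the mean cost exceeds the
    # budget, so feasible policies incur no penalty at all.
    violation = torch.clamp(cost.mean() - COST_BUDGET, min=0.0)
    return -reward.mean() + RHO * violation

# Toy usage: scores for a batch of 4 sampled responses.
reward = torch.tensor([0.8, 0.5, 0.9, 0.3])
cost = torch.tensor([0.1, -0.2, 0.4, 0.0])
print(cs_rlhf_loss(reward, cost))  # scalar loss to backpropagate
```

Because the penalty weight is fixed, each training step is a single unconstrained gradient update, avoiding the alternating primal-dual updates that make Lagrangian CMDP training unstable.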
📝 Abstract
Ensuring safety is a foundational requirement for large language models (LLMs). Achieving an appropriate balance between enhancing the utility of model outputs and mitigating their potential for harm is a complex and persistent challenge. Contemporary approaches frequently formalize this problem within the framework of Constrained Markov Decision Processes (CMDPs) and employ established CMDP optimization techniques. However, these methods exhibit two notable limitations. First, their reliance on reward and cost functions renders performance highly sensitive to the underlying scoring mechanism, which must capture semantic meaning rather than being triggered by superficial keywords. Second, CMDP-based training entails tuning a dual variable, a process that is computationally expensive and, for any fixed dual variable, offers no provable safety guarantee, leaving the model exploitable through adversarial jailbreaks. To overcome these limitations, we introduce Certifiable Safe-RLHF (CS-RLHF), which employs a cost model trained on a large-scale corpus to assign semantically grounded safety scores. In contrast to the Lagrangian-based approach, CS-RLHF adopts a rectified penalty-based formulation. This design draws on the theory of exact penalty functions in constrained optimization, wherein constraint satisfaction is enforced directly through a suitably chosen penalty term. With an appropriately scaled penalty, feasibility of the safety constraints is guaranteed at the optimizer, eliminating the need for dual-variable updates. Empirical evaluation demonstrates that CS-RLHF outperforms state-of-the-art LLMs, yielding responses that are at least five times more efficient under both nominal and jailbreaking prompts.
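To make the contrast concrete, the two formulations can be sketched as follows (a hedged reconstruction in our own notation, not verbatim from the paper: $J_R$ is the expected reward, $J_C$ the expected semantic cost, $d$ the safety budget, and $\rho$ a fixed penalty weight):

```latex
% Conventional CMDP training: primal-dual, with the multiplier \lambda updated online
\max_{\theta} \min_{\lambda \ge 0} \; J_R(\theta) - \lambda \bigl( J_C(\theta) - d \bigr)

% Rectified exact-penalty formulation: \rho is fixed once; by exact-penalty
% theory, any sufficiently large \rho makes every optimizer feasible,
% i.e. J_C(\theta) \le d, with no dual-variable updates
\max_{\theta} \; J_R(\theta) - \rho \, \max\bigl( 0, \; J_C(\theta) - d \bigr)
```

The rectified term vanishes whenever the constraint is satisfied, so a sufficiently large fixed $\rho$ leaves feasible policies unaffected while making infeasible ones strictly suboptimal.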