Controllable Safety Alignment: Inference-Time Adaptation to Diverse Safety Requirements

📅 2024-10-11
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
Current LLM safety alignment relies on uniform, static criteria, failing to accommodate cross-cultural, role-specific, and user-personalized safety requirements while incurring high retraining costs. To address this, the authors propose CoSA, a configurable safety alignment framework enabling dynamic, inference-time behavioral control via natural-language safety configs (e.g., "allow medical consultation but prohibit legal advice") without model retraining. Methodologically, CoSA introduces: (i) CoSAlign, a data-centric alignment strategy; (ii) CoSA-Score, an evaluation protocol that jointly measures helpfulness and adherence to the configured safety policy; and (iii) CoSApien, a human-authored, multi-scenario benchmark covering diverse safety requirements. Experiments show that CoSAlign substantially outperforms strong baselines, including in-context alignment, in controllability, enabling fine-grained, inference-time switching among safety policies. CoSA offers a lightweight, flexible, and interpretable paradigm for on-demand safety alignment of LLMs.

📝 Abstract
The current paradigm for safety alignment of large language models (LLMs) follows a one-size-fits-all approach: the model refuses to interact with any content deemed unsafe by the model provider. This approach lacks flexibility in the face of varying social norms across cultures and regions. In addition, users may have diverse safety needs, making a model with static safety standards too restrictive to be useful, as well as too costly to re-align. We propose Controllable Safety Alignment (CoSA), a framework designed to adapt models to diverse safety requirements without re-training. Instead of aligning a fixed model, we align models to follow safety configs -- free-form natural language descriptions of the desired safety behaviors -- that are provided as part of the system prompt. To adjust model safety behavior, authorized users only need to modify such safety configs at inference time. To enable that, we propose CoSAlign, a data-centric method for aligning LLMs to easily adapt to diverse safety configs. Furthermore, we devise a novel controllability evaluation protocol that considers both helpfulness and configured safety, summarizing them into CoSA-Score, and construct CoSApien, a human-authored benchmark that consists of real-world LLM use cases with diverse safety requirements and corresponding evaluation prompts. We show that CoSAlign leads to substantial gains of controllability over strong baselines including in-context alignment. Our framework encourages better representation of and adaptation to pluralistic human values in LLMs, thereby increasing their practicality.
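The usage pattern the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the config text, the helper name, and the message format (a standard system/user chat layout) are assumptions for the sake of the example.

```python
# Sketch of the CoSA usage pattern: a free-form natural-language safety
# config is placed in the system prompt, and an authorized user changes
# safety behavior by swapping the config at inference time -- no retraining.
# The config wording and helper name here are hypothetical.

def build_messages(safety_config: str, user_query: str) -> list[dict]:
    """Attach a natural-language safety config as the system prompt."""
    return [
        {"role": "system", "content": f"Safety config: {safety_config}"},
        {"role": "user", "content": user_query},
    ]

# Adjusting safety behavior is just a prompt change, not a weight update.
medical_config = "Allow detailed medical consultation; refuse legal advice."
messages = build_messages(medical_config, "What are common migraine triggers?")
```

The point of the design is that the same aligned model serves many deployments: each authorized deployer supplies its own config string rather than a separately fine-tuned checkpoint.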
Problem

Research questions and friction points this paper is trying to address.

Adapting LLMs to diverse safety requirements without retraining.
Enabling dynamic safety behavior adjustment via natural language configs.
Improving controllability and practicality of LLMs for pluralistic human values.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adapts LLMs to diverse safety requirements without retraining.
Uses natural language safety configs for inference-time adjustments.
Introduces CoSA-Score, jointly evaluating helpfulness and configured safety.
Constructs CoSApien, a human-authored benchmark of real-world use cases with diverse safety requirements.
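The abstract says CoSA-Score summarizes both helpfulness and configured safety into one number. A plausible aggregation, shown below purely as an assumption (the paper's exact formula may differ), rewards responses that are both config-safe and helpful, gives no credit to safe-but-unhelpful ones, and penalizes any response that violates the active safety config.

```python
# Hedged sketch of a CoSA-Score-style aggregation. This scoring rule is an
# illustrative assumption, not the paper's published formula: it combines
# per-response helpfulness and config-safety judgments into a single score.

def cosa_score(judgments: list[tuple[bool, bool]]) -> float:
    """judgments: (is_helpful, is_safe_under_config) per evaluation prompt."""
    total = 0.0
    for helpful, safe in judgments:
        if not safe:
            total -= 1.0  # violating the safety config is penalized
        elif helpful:
            total += 1.0  # safe and helpful earns full credit
        # safe but unhelpful contributes 0: refusal is allowed, not rewarded
    return total / len(judgments)

print(cosa_score([(True, True), (False, True), (True, False), (True, True)]))  # 0.25
```

A joint score like this prevents gaming either axis alone: a model that refuses everything stays safe but scores near zero, while one that answers everything helpfully is dragged down by config violations.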