Mitigating the Safety-Utility Trade-off in LLM Alignment via Adaptive Safe Context Learning

📅 2026-02-14
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This work addresses the trade-off between safety and utility in large language models, which often arises when safety rules are rigidly coupled with refusal behaviors during alignment, impairing reasoning capabilities. To overcome this limitation, the authors propose Adaptive Safe Context Learning (ASCL), a framework that formulates safety alignment as a multi-turn tool-use process, enabling the model to autonomously decide when to retrieve safety rules and how to reason toward a response. ASCL decouples rule retrieval from response generation and introduces Inverse Frequency Policy Optimization (IFPO) to correct advantage estimation bias in reinforcement learning, thereby mitigating over-reliance on safety rules. Experimental results demonstrate that ASCL significantly enhances reasoning performance while maintaining strong safety guarantees, effectively alleviating the safety–utility trade-off.
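The summary describes ASCL's multi-turn tool-use formulation only in prose. The Python sketch below illustrates one way such a decoupled retrieve-then-reason loop could look; `llm_generate`, `retrieve_safety_rules`, and the `CONSULT_TOKEN` action marker are hypothetical stand-ins, not the paper's actual interface.

```python
# Minimal sketch (assumptions flagged below) of a decoupled
# retrieve-then-reason loop: the model itself decides, turn by turn,
# whether to pull safety rules into context before answering.

CONSULT_TOKEN = "<consult_safety_rules>"  # hypothetical action marker


def answer(query, llm_generate, retrieve_safety_rules, max_turns=4):
    """llm_generate(messages) -> str and retrieve_safety_rules(query) -> str
    are caller-supplied stand-ins; the paper's tool interface is not shown."""
    context = [{"role": "user", "content": query}]
    step = ""
    for _ in range(max_turns):
        step = llm_generate(context)
        if CONSULT_TOKEN in step:
            # Rule retrieval is a tool call rather than memorized content:
            # rules enter the context only when the policy asks for them.
            context.append({"role": "assistant", "content": step})
            context.append({"role": "tool", "content": retrieve_safety_rules(query)})
        else:
            # No consultation requested: treat this step as the final answer.
            return step
    return step  # turn budget exhausted; return the last generation
```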

📝 Abstract
While reasoning models have achieved remarkable success in complex reasoning tasks, their increasing power necessitates stringent safety measures. For safety alignment, the core challenge lies in the inherent trade-off between safety and utility. Prevailing alignment strategies typically construct CoT training data with explicit safety rules via context distillation, an approach that inadvertently limits reasoning capabilities by creating a rigid association between rule memorization and refusal. To mitigate the safety-utility trade-off, we propose the Adaptive Safe Context Learning (ASCL) framework, which improves reasoning by supplying the proper context on demand. ASCL formulates safety alignment as a multi-turn tool-use process, empowering the model to autonomously decide when to consult safety rules and how to continue its reasoning. Furthermore, to counteract the model's preference for rule consultation during RL, we introduce Inverse Frequency Policy Optimization (IFPO) to rebalance advantage estimates. By decoupling rule retrieval from subsequent reasoning, our method achieves higher overall performance than baselines.
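The abstract leaves IFPO's rebalancing unspecified. As a reading aid, here is a minimal NumPy sketch of one plausible instantiation, assuming a GRPO-style group-normalized advantage that is then scaled by the inverse empirical frequency of the rule-consultation branch within the sampled group; the estimator details are assumptions, not the paper's exact formulation.

```python
import numpy as np


def ifpo_advantages(rewards, consulted, eps=1e-6):
    """Inverse-frequency advantage rebalancing (illustrative sketch).

    rewards   : per-rollout scalar rewards for one prompt's sampled group
    consulted : per-rollout bool, True if the rollout consulted safety rules
    """
    rewards = np.asarray(rewards, dtype=float)
    consulted = np.asarray(consulted, dtype=bool)

    # Assumed baseline: GRPO-style group-normalized advantages.
    adv = (rewards - rewards.mean()) / (rewards.std() + eps)

    # Down-weight whichever branch (consult vs. skip) dominates the group,
    # so the policy is not pushed toward consulting rules merely because
    # that branch contributes more gradient mass.
    p_consult = consulted.mean()
    branch_freq = np.where(consulted, p_consult, 1.0 - p_consult)
    return adv / (branch_freq + eps)


# Example: 3 of 4 rollouts consulted rules; their advantages are scaled by
# 1/0.75 while the lone non-consulting rollout is scaled by 1/0.25.
print(ifpo_advantages([1.0, 0.5, 0.8, 0.2], [True, True, True, False]))
```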
Problem

Research questions and friction points this paper is trying to address.

safety-utility trade-off
LLM alignment
reasoning capability
safety alignment
context learning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptive Safe Context Learning
Safety-Utility Trade-off
Multi-turn Tool-use
Inverse Frequency Policy Optimization
LLM Alignment
🔎 Similar Papers
No similar papers found.
Yanbo Wang
School of Artificial Intelligence, University of Chinese Academy of Sciences; NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences
Minzheng Wang
Institute of Automation, Chinese Academy of Sciences
Large Language Models · Natural Language Processing
Jian Liang
Kuaishou Inc.
transfer learning · graph learning
Lu Wang
Ritzz-AI
Yongcan Yu
Master Student, CASIA
Trustworthy AI · Safety in AI
Ran He
School of Artificial Intelligence, University of Chinese Academy of Sciences; NLPR & MAIS, Institute of Automation, Chinese Academy of Sciences