AgentAlign: Navigating Safety Alignment in the Shift from Informative to Agentic Large Language Models

📅 2025-05-29
🤖 AI Summary
Large foundation models for embodied agents are susceptible to misuse and lack safety alignment for agentic behaviors during post-training. Method: This paper proposes an Abstract Action Chain (AAC)-based data synthesis framework that models high-level behavioral sequences and instantiates them as tool invocations in simulated environments. By semantically decoupling a chain from its interpretation, the framework co-generates malicious and benign instructions, enabling controllable, multi-step calibration of the balance between harmfulness and usefulness. Contribution/Results: Evaluated on the AgentHarm benchmark, the approach improves the safety rates of three families of open-source models by 35.8% to 79.5%, without compromising, and often improving, task completion performance, and it significantly outperforms prompt-engineering baselines. The authors publicly release a high-quality safety-aligned dataset and implementation code.

📝 Abstract
The acquisition of agentic capabilities has transformed LLMs from "knowledge providers" to "action executors", a trend that while expanding LLMs' capability boundaries, significantly increases their susceptibility to malicious use. Previous work has shown that current LLM-based agents execute numerous malicious tasks even without being attacked, indicating a deficiency in agentic use safety alignment during the post-training phase. To address this gap, we propose AgentAlign, a novel framework that leverages abstract behavior chains as a medium for safety alignment data synthesis. By instantiating these behavior chains in simulated environments with diverse tool instances, our framework enables the generation of highly authentic and executable instructions while capturing complex multi-step dynamics. The framework further ensures model utility by proportionally synthesizing benign instructions through non-malicious interpretations of behavior chains, precisely calibrating the boundary between helpfulness and harmlessness. Evaluation results on AgentHarm demonstrate that fine-tuning three families of open-source models using our method substantially improves their safety (35.8% to 79.5% improvement) while minimally impacting or even positively enhancing their helpfulness, outperforming various prompting methods. The dataset and code have both been open-sourced.
Problem

Research questions and friction points this paper is trying to address.

Addressing safety risks in agentic LLMs from malicious use
Improving post-training safety alignment for agentic LLM behaviors
Balancing model safety and utility in multi-step agentic tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses abstract behavior chains for safety alignment
Simulates diverse tool instances for authentic instructions
Balances benign and harmful instruction synthesis
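The core idea behind these innovations, grounding one abstract behavior chain into either a benign or a malicious concrete instruction depending on how its argument slots are interpreted, can be sketched as follows. This is a minimal illustrative sketch only; all names (`AbstractStep`, `instantiate_chain`, the tool bindings) are hypothetical and do not reflect the paper's actual AgentAlign implementation.

```python
from dataclasses import dataclass

@dataclass
class AbstractStep:
    action: str    # high-level behavior, e.g. "search" or "send_message"
    argument: str  # abstract slot, filled in only at instantiation time

def instantiate_chain(chain, tool_bindings, interpretation):
    """Ground an abstract behavior chain into concrete tool calls.

    The same chain yields a benign or a harmful instruction depending on
    the 'interpretation' used to fill its argument slots, which is what
    lets benign and malicious data be co-generated in balanced proportion.
    """
    calls = []
    for step in chain:
        tool = tool_bindings[step.action]        # pick a simulated tool instance
        arg = interpretation[step.argument]      # benign vs. malicious filler
        calls.append(f"{tool}({arg!r})")
    return calls

# One abstract chain: look something up, then send a message about it.
chain = [AbstractStep("search", "topic"), AbstractStep("send_message", "content")]
tools = {"search": "web_search", "send_message": "email_send"}

benign = instantiate_chain(
    chain, tools,
    {"topic": "local charity events", "content": "volunteering invite"})
harmful = instantiate_chain(
    chain, tools,
    {"topic": "target's home address", "content": "threatening note"})
print(benign)
print(harmful)
```

The key design point is that harmfulness lives in the interpretation, not in the chain itself, so the same multi-step structure can be reused to produce matched benign counterparts for utility preservation.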
Jinchuan Zhang
University of Electronic Science and Technology of China
Temporal Knowledge Graph; Graph Representation Learning
Lu Yin
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
Yan Zhou
Institute of Information Engineering, Chinese Academy of Sciences
Songlin Hu
Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences