TRIDENT: Enhancing Large Language Model Safety with Tri-Dimensional Diversified Red-Teaming Data Synthesis

📅 2025-05-30
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) are prone to generating harmful content or being maliciously exploited, and existing safety alignment datasets suffer from narrow risk coverage: they emphasize lexical diversity while neglecting critical dimensions such as malicious intent and jailbreak tactics. Method: The authors propose a red-teaming data synthesis framework spanning three orthogonal dimensions: lexical diversity, malicious intent, and jailbreak tactics. It introduces a systematic, quantitative evaluation of risk coverage in alignment datasets and designs a persona-based, zero-shot automated red-teaming paradigm that generates harmful instructions paired with ethically aligned responses. Contribution/Results: Experiments on Llama 3.1-8B show that supervised fine-tuning on the synthesized data reduces the average Harm Score by 14.29% and the Attack Success Rate by 20%, outperforming the strongest baseline fine-tuned on the WildBreak dataset.

📝 Abstract
Large Language Models (LLMs) excel in various natural language processing tasks but remain vulnerable to generating harmful content or being exploited for malicious purposes. Although safety alignment datasets have been introduced to mitigate such risks through supervised fine-tuning (SFT), these datasets often lack comprehensive risk coverage. Most existing datasets focus primarily on lexical diversity while neglecting other critical dimensions. To address this limitation, we propose a novel analysis framework to systematically measure the risk coverage of alignment datasets across three essential dimensions: Lexical Diversity, Malicious Intent, and Jailbreak Tactics. We further introduce TRIDENT, an automated pipeline that leverages persona-based, zero-shot LLM generation to produce diverse and comprehensive instructions spanning these dimensions. Each harmful instruction is paired with an ethically aligned response, resulting in two datasets: TRIDENT-Core, comprising 26,311 examples, and TRIDENT-Edge, with 18,773 examples. Fine-tuning Llama 3.1-8B on TRIDENT-Edge demonstrates substantial improvements, achieving an average 14.29% reduction in Harm Score, and a 20% decrease in Attack Success Rate compared to the best-performing baseline model fine-tuned on the WildBreak dataset.
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM safety by addressing harmful content generation
Improving risk coverage in safety alignment datasets
Automating diverse red-teaming data synthesis across key dimensions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Tri-dimensional analysis for risk coverage
Automated pipeline for diverse instruction generation
Persona-based zero-shot LLM generation
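The three bullets above can be pictured as a combinatorial sweep over the paper's three risk dimensions. Below is a minimal illustrative sketch, not the paper's actual pipeline: the persona, intent, and tactic seed lists and the prompt template are hypothetical placeholders standing in for TRIDENT's generation step, which feeds such prompts to an LLM.

```python
# Illustrative sketch of persona-based zero-shot prompt assembly across
# TRIDENT's three dimensions. All seed lists and the template below are
# invented placeholders, not the paper's actual data or prompts.
from itertools import product

personas = ["a disgruntled chemist", "a curious teenager"]      # lexical-diversity axis
intents = ["fraud", "privacy violation"]                        # malicious-intent axis
tactics = ["role-play framing", "hypothetical scenario"]        # jailbreak-tactic axis

def build_prompt(persona: str, intent: str, tactic: str) -> str:
    """Compose one zero-shot generation instruction for a red-teaming LLM."""
    return (
        f"Adopt the persona of {persona}. Using {tactic}, draft a red-team "
        f"instruction targeting {intent}, then write the ethically aligned "
        f"refusal that should be paired with it."
    )

# The Cartesian product gives broad coverage of all dimension combinations.
prompts = [build_prompt(p, i, t) for p, i, t in product(personas, intents, tactics)]
print(len(prompts))  # 2 x 2 x 2 = 8 candidate generation prompts
```

Pairing every harmful instruction with an aligned refusal, as in the template above, is what yields the SFT-ready (instruction, safe response) examples the abstract describes.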
👥 Authors
Xiaorui Wu
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, China
Xiaofeng Mao
Alibaba Group
Fei Li
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, China
Xin Zhang
Ant Group
Xuanhong Li
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, China
Chong Teng
Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, School of Cyber Science and Engineering, Wuhan University, Wuhan, China
Donghong Ji
Wuhan University
Zhuang Li
School of Computing Technologies, Royal Melbourne Institute of Technology, Australia