RADAR: A Risk-Aware Dynamic Multi-Agent Framework for LLM Safety Evaluation via Role-Specialized Collaboration

📅 2025-09-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing LLM safety evaluation methods suffer from evaluator bias and model homogeneity, compromising the robustness of risk identification. To address this, we propose a risk-aware dynamic multi-agent framework: first, we decouple the risk conceptual space into three disjoint subspaces—explicit risk, implicit risk, and non-risk; second, we introduce a four-role specialization scheme, multi-round adversarial debate, and dynamic distribution updating to enable self-evolving assessment and systematic bias mitigation. Integrating formal theoretical modeling with collaborative reasoning, our framework significantly improves coverage and sensitivity. Evaluated on 800 high-difficulty test cases and public benchmarks, it achieves up to a 28.87% absolute gain in risk identification accuracy over state-of-the-art methods, while also demonstrating superior stability and fine-grained risk discrimination capability.

📝 Abstract
Existing safety evaluation methods for large language models (LLMs) suffer from inherent limitations, including evaluator bias and detection failures arising from model homogeneity, which collectively undermine the robustness of risk evaluation processes. This paper seeks to re-examine the risk evaluation paradigm by introducing a theoretical framework that reconstructs the underlying risk concept space. Specifically, we decompose the latent risk concept space into three mutually exclusive subspaces: the explicit risk subspace (encompassing direct violations of safety guidelines), the implicit risk subspace (capturing potential malicious content that requires contextual reasoning for identification), and the non-risk subspace. Furthermore, we propose RADAR, a multi-agent collaborative evaluation framework that leverages multi-round debate mechanisms through four specialized complementary roles and employs dynamic update mechanisms to achieve self-evolution of risk concept distributions. This approach enables comprehensive coverage of both explicit and implicit risks while mitigating evaluator bias. To validate the effectiveness of our framework, we construct an evaluation dataset comprising 800 challenging cases. Extensive experiments on our challenging testset and public benchmarks demonstrate that RADAR significantly outperforms baseline evaluation methods across multiple dimensions, including accuracy, stability, and self-evaluation risk sensitivity. Notably, RADAR achieves a 28.87% improvement in risk identification accuracy compared to the strongest baseline evaluation method.
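The abstract's multi-round debate with dynamic distribution updating can be sketched, under heavy simplification, as weighted voting among role agents whose weights drift toward the emerging consensus. This is an illustrative assumption about the mechanism, not the paper's actual algorithm: the `Verdict` stubs, the fixed `1.2`/`0.9` update factors, and the round structure are all invented here, and in RADAR the agents would be LLM-backed debaters rather than precomputed votes.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    EXPLICIT = "explicit"
    IMPLICIT = "implicit"
    NON_RISK = "non_risk"

@dataclass
class Verdict:
    """One agent's judgment in one debate round (stub for an LLM call)."""
    label: Risk
    confidence: float

def debate(agent_verdicts: list[list[Verdict]], rounds: int = 3) -> Risk:
    """Aggregate per-round verdicts; agent weights are updated each round."""
    n = len(agent_verdicts)
    weights = [1.0 / n] * n  # uniform prior over agents
    label = Risk.NON_RISK
    for r in range(rounds):
        # Weighted, confidence-scaled vote over the three risk subspaces.
        scores = {risk: 0.0 for risk in Risk}
        for w, verdicts in zip(weights, agent_verdicts):
            v = verdicts[min(r, len(verdicts) - 1)]
            scores[v.label] += w * v.confidence
        label = max(scores, key=scores.get)
        # Dynamic update: boost agents agreeing with the round consensus,
        # dampen dissenters, then renormalize (factors are arbitrary here).
        weights = [
            w * (1.2 if vs[min(r, len(vs) - 1)].label == label else 0.9)
            for w, vs in zip(weights, agent_verdicts)
        ]
        total = sum(weights)
        weights = [w / total for w in weights]
    return label
```

A real deployment would replace the stub verdicts with fresh LLM queries each round, conditioned on the other agents' prior arguments, which is what makes the debate adversarial rather than a one-shot ensemble.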
Problem

Research questions and friction points this paper is trying to address.

Addresses limitations in LLM safety evaluation methods
Decomposes risk concept space into explicit, implicit, and non-risk subspaces
Proposes multi-agent framework to mitigate evaluator bias
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multi-agent framework with specialized roles for safety
Dynamic update mechanisms for evolving risk concepts
Three-part risk subspace decomposition for comprehensive coverage
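The three-part decomposition above can be illustrated as a mutually exclusive triage step: explicit risk is surface-detectable, implicit risk needs contextual cues, and everything else falls to the non-risk subspace. The keyword markers below are toy assumptions standing in for the guideline-violation and contextual-reasoning checks the paper describes; RADAR's actual detectors are model-based.

```python
def triage(text: str) -> str:
    """Toy router mapping text to one of three disjoint risk subspaces."""
    # Stand-ins for direct safety-guideline violations (explicit risk).
    explicit_markers = ("build a bomb", "step-by-step attack")
    # Stand-ins for contextual cues that mask malicious intent (implicit risk).
    implicit_markers = ("hypothetically", "for a novel", "my grandmother used to")
    t = text.lower()
    if any(m in t for m in explicit_markers):
        return "explicit"
    if any(m in t for m in implicit_markers):
        return "implicit"
    return "non_risk"  # the residual subspace
```

Because the subspaces are disjoint, the first matching branch decides the label; implicit risk is only considered once explicit violations are ruled out, mirroring the claim that implicit risk requires reasoning beyond surface patterns.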
Xiuyuan Chen
School of Computer Science, Shanghai Jiao Tong University
Jian Zhao
Institute of Artificial Intelligence (TeleAI), China Telecom
Yuchen Yuan
Institute of Artificial Intelligence (TeleAI), China Telecom
Tianle Zhang
Institute of Artificial Intelligence (TeleAI), China Telecom
Huilin Zhou
University of Science and Technology of China
Zheng Zhu
GigaAI
Ping Hu
UESTC
Computer Vision · Deep Learning · Image/Video Processing
Linghe Kong
Shanghai Jiao Tong University
Internet of Things · Mobile Computing · Big Data
Chi Zhang
Institute of Artificial Intelligence (TeleAI), China Telecom
Weiran Huang
School of Computer Science, Shanghai Jiao Tong University
Xuelong Li
Institute of Artificial Intelligence (TeleAI), China Telecom