TeleAI-Safety: A comprehensive LLM jailbreaking benchmark towards attacks, defenses, and evaluations

📅 2025-12-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing LLM security evaluation frameworks suffer from fragmented attack/defense/assessment modules and a trade-off between flexibility and standardization, hindering cross-model comparison and quantitative risk analysis. This paper introduces the first modular, reproducible LLM security evaluation framework, unifying 19 attack methods, 29 defense techniques, and 19 assessment metrics across 12 risk categories and 342 standardized test cases. Leveraging customized adversarial prompts and multidimensional metrics—including jailbreak success rate, defense robustness, and security-utility trade-offs—the framework systematically identifies common vulnerabilities and defense failure patterns across 14 mainstream LLMs. All code, datasets, and evaluation results are publicly released, establishing a reproducible benchmark and analytical paradigm for LLM security research.

Technology Category

Application Category

📝 Abstract
While the deployment of large language models (LLMs) in high-value industries continues to expand, the systematic assessment of their safety against jailbreak and prompt-based attacks remains insufficient. Existing safety evaluation benchmarks and frameworks are often limited by an imbalanced integration of core components (attack, defense, and evaluation methods) and an isolation between flexible evaluation frameworks and standardized benchmarking capabilities. These limitations hinder reliable cross-study comparisons and create unnecessary overhead for comprehensive risk assessment. To address these gaps, we present TeleAI-Safety, a modular and reproducible framework coupled with a systematic benchmark for rigorous LLM safety evaluation. Our framework integrates a broad collection of 19 attack methods (including one self-developed method), 29 defense methods, and 19 evaluation methods (including one self-developed method). With a curated attack corpus of 342 samples spanning 12 distinct risk categories, the TeleAI-Safety benchmark conducts extensive evaluations across 14 target models. The results reveal systematic vulnerabilities and model-specific failure cases, highlighting critical trade-offs between safety and utility, and identifying potential defense patterns for future optimization. In practical scenarios, TeleAI-Safety can be flexibly adjusted with customized attack, defense, and evaluation combinations to meet specific demands. We release our complete code and evaluation results to facilitate reproducible research and establish unified safety baselines.
Problem

Research questions and friction points this paper is trying to address.

Addresses insufficient systematic safety assessment of LLMs against jailbreak attacks
Integrates diverse attack, defense, and evaluation methods for comprehensive risk analysis
Provides a modular framework to enable reproducible and customizable safety benchmarking
Innovation

Methods, ideas, or system contributions that make the work stand out.

Modular framework integrates attack, defense, evaluation methods
Benchmark uses curated corpus across risk categories and models
Flexible customization for practical safety and utility assessments
🔎 Similar Papers
No similar papers found.
X
Xiuyuan Chen
Institute of Artificial Intelligence (TeleAI) of China Telecom, Shanghai Jiao Tong University
J
Jian Zhao
Institute of Artificial Intelligence (TeleAI) of China Telecom
Y
Yuxiang He
Institute of Artificial Intelligence (TeleAI) of China Telecom, Sichuan University
Y
Yuan Xun
Institute of Artificial Intelligence (TeleAI) of China Telecom, University of Chinese Academy of Sciences
X
Xinwei Liu
Institute of Artificial Intelligence (TeleAI) of China Telecom, University of Chinese Academy of Sciences
Yanshu Li
Yanshu Li
Brown University
NLPMultimodal Learning
H
Huilin Zhou
Institute of Artificial Intelligence (TeleAI) of China Telecom, University of Science and Technology of China
W
Wei Cai
Institute of Artificial Intelligence (TeleAI) of China Telecom, Peking University
Z
Ziyan Shi
Institute of Artificial Intelligence (TeleAI) of China Telecom, Harbin Institute of Technology
Y
Yuchen Yuan
Institute of Artificial Intelligence (TeleAI) of China Telecom
T
Tianle Zhang
Institute of Artificial Intelligence (TeleAI) of China Telecom
C
Chi Zhang
Institute of Artificial Intelligence (TeleAI) of China Telecom
X
Xuelong Li
Institute of Artificial Intelligence (TeleAI) of China Telecom