LongSafety: Evaluating Long-Context Safety of Large Language Models

📅 2025-02-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Safety evaluation of large language models (LLMs) in long-context scenarios remains critically underexplored. Method: We introduce LongSafety, the first open-ended long-context safety benchmark, covering seven safety risk categories and six user-oriented task types, with 1,543 test cases averaging 5,424 words per context. We systematically uncover vulnerabilities unique to long contexts, showing that strong short-context safety performance does not transfer, and that relevant context and extended input sequences significantly amplify risk. The methodology combines human-crafted and rule-augmented data construction, a multi-dimensional human verification protocol, and a cross-model, cross-task safety-rate quantification framework. Contribution/Results: Evaluations of 16 state-of-the-art LLMs show that most achieve long-context safety rates below 55%, and the analysis pinpoints the most challenging task types and safety categories. All data and code are publicly released.

📝 Abstract
As Large Language Models (LLMs) continue to advance in understanding and generating long sequences, new safety concerns have been introduced through the long context. However, the safety of LLMs in long-context tasks remains under-explored, leaving a significant gap in both evaluation and improvement of their safety. To address this, we introduce LongSafety, the first comprehensive benchmark specifically designed to evaluate LLM safety in open-ended long-context tasks. LongSafety encompasses 7 categories of safety issues and 6 user-oriented long-context tasks, with a total of 1,543 test cases, averaging 5,424 words per context. Our evaluation towards 16 representative LLMs reveals significant safety vulnerabilities, with most models achieving safety rates below 55%. Our findings also indicate that strong safety performance in short-context scenarios does not necessarily correlate with safety in long-context tasks, emphasizing the unique challenges and urgency of improving long-context safety. Moreover, through extensive analysis, we identify challenging safety issues and task types for long-context models. Furthermore, we find that relevant context and extended input sequences can exacerbate safety risks in long-context scenarios, highlighting the critical need for ongoing attention to long-context safety challenges. Our code and data are available at https://github.com/thu-coai/LongSafety.
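The abstract reports per-model safety rates (most below 55%) broken down by safety category and task type. A minimal sketch of that kind of aggregation is below, assuming a hypothetical record format of binary safe/unsafe judgments per test case; this is an illustration, not the paper's actual evaluation code or schema.

```python
from collections import defaultdict

def safety_rates(results):
    """Aggregate binary safe/unsafe judgments into per-(category, task) safety rates.

    `results` is a list of dicts like
    {"category": "...", "task": "...", "safe": True}
    -- a hypothetical record format, not LongSafety's actual schema.
    """
    totals = defaultdict(int)   # test cases seen per (category, task)
    safe = defaultdict(int)     # cases judged safe per (category, task)
    for r in results:
        key = (r["category"], r["task"])
        totals[key] += 1
        safe[key] += int(r["safe"])
    # Safety rate = fraction of cases judged safe in each bucket
    return {k: safe[k] / totals[k] for k in totals}

# Example: two judgments in one (category, task) bucket
demo = [
    {"category": "misinformation", "task": "summarization", "safe": True},
    {"category": "misinformation", "task": "summarization", "safe": False},
]
print(safety_rates(demo))  # {('misinformation', 'summarization'): 0.5}
```

A model-level safety rate (the <55% figure reported in the paper) would then be the mean over all test cases for that model.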
Problem

Research questions and friction points this paper is trying to address.

Long-context safety of LLMs is under-explored, with no dedicated open-ended benchmark
Strong short-context safety does not guarantee safety in long-context tasks
Long contexts introduce unique vulnerabilities that existing evaluations miss
Innovation

Methods, ideas, or system contributions that make the work stand out.

First comprehensive benchmark for open-ended long-context LLM safety
Covers 7 safety categories and 6 user-oriented tasks (1,543 test cases)
Identifies long-context-specific risk factors: relevant context and extended input length