Advancing LLM Safe Alignment with Safety Representation Ranking

📅 2025-05-21
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing safety evaluation methods for large language models (LLMs) rely solely on generated output text, overlooking fine-grained safety signals embedded in internal hidden states. Method: We propose SRR (Safety Representation Ranking), a listwise ranking framework that shifts safety assessment to the intermediate-layer hidden-state space of Transformers. SRR jointly encodes instruction-response pairs, applies listwise learning over candidate responses, and employs a lightweight similarity-based scoring mechanism, operating entirely on frozen hidden states without model fine-tuning. Contribution/Results: Evaluated on multiple adversarial safety benchmarks, SRR significantly improves robustness and achieves substantially higher safety selection accuracy than state-of-the-art output-text-based evaluators. By leveraging latent representations for fine-grained safety modeling, SRR establishes a novel paradigm for LLM safety assessment.

📝 Abstract
The rapid advancement of large language models (LLMs) has demonstrated milestone success in a variety of tasks, yet their potential for generating harmful content has raised significant safety concerns. Existing safety evaluation approaches typically operate directly on textual responses, overlooking the rich information embedded in the model's internal representations. In this paper, we propose Safety Representation Ranking (SRR), a listwise ranking framework that selects safe responses using hidden states from the LLM itself. SRR encodes both instructions and candidate completions using intermediate transformer representations and ranks candidates via a lightweight similarity-based scorer. Our approach directly leverages internal model states and supervision at the list level to capture subtle safety signals. Experiments across multiple benchmarks show that SRR significantly improves robustness to adversarial prompts. Our code will be available upon publication.
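The abstract's pipeline (encode instruction and candidates via intermediate hidden states, score candidates with a lightweight similarity function, supervise at the list level) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the cosine scorer, the 2-d toy vectors, and the softmax cross-entropy listwise objective are all assumptions standing in for whatever pooling, scorer, and loss the paper actually uses.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors given as plain lists."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def srr_scores(query_h, cand_hs):
    """Score each candidate by similarity of its (frozen) hidden state
    to the instruction's hidden state; no model fine-tuning involved."""
    return [cosine(query_h, h) for h in cand_hs]

def listwise_softmax_loss(scores, safe_index):
    """Listwise objective: softmax over the whole candidate list, then
    cross-entropy against the index of the safe response."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    return -math.log(exps[safe_index] / sum(exps))

# Toy example: 2-d vectors standing in for pooled transformer hidden states.
query = [1.0, 0.0]
candidates = [[0.9, 0.1], [0.0, 1.0], [-1.0, 0.2]]
scores = srr_scores(query, candidates)
ranking = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
loss = listwise_softmax_loss(scores, safe_index=0)
```

At inference time only the ranking step is needed: the top-ranked candidate is selected as the safe response, so the scorer adds negligible overhead on top of a single forward pass that already produces the hidden states.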
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM safety by analyzing internal representations
Ranking safe responses using hidden states and similarity
Improving robustness against adversarial prompts via SRR
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses hidden states for safety ranking
Lightweight similarity-based scorer
Improves robustness to adversarial prompts
Tianqi Du
PhD Student, Peking University
Machine learning
Zeming Wei
Ph.D. Candidate, Peking University
Trustworthy AI · Adversarial Robustness · Explainability
Quan Chen
School of Mathematical Sciences, Peking University
Chenheng Zhang
State Key Lab of General Artificial Intelligence, School of Intelligence Science and Technology, Peking University
Yisen Wang
Assistant Professor, Peking University
Machine Learning · Self-Supervised Learning · Large Language Models · Safety