Root Defence Strategies: Ensuring Safety of LLM at the Decoding Level

📅 2024-10-09
🏛️ arXiv.org
📈 Citations: 1 (influential: 0)
🤖 AI Summary
Large language models (LLMs) face an increased risk of generating harmful content under jailbreak attacks, while existing defenses suffer from insufficient use of decoding-time information and from excessive rejection that degrades model utility. Method: The paper proposes the first decoding-level, token-wise proactive defense framework. It dynamically assesses the harmfulness of each token during autoregressive generation and leverages the model's intrinsic discriminative capability to adjust outputs rather than indiscriminately rejecting them, and it incorporates speculative decoding to accelerate safety-aware inference. Contribution/Results: Experiments show that the method significantly improves robustness against jailbreak attacks, achieving state-of-the-art safety performance while preserving the base model's inference speed and task-oriented helpfulness.
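The token-wise defense described in the summary can be pictured with a short sketch. Everything below is illustrative: `lm_step`, `harm_score`, `safe_token`, and the threshold are hypothetical stand-ins for the paper's components, not its actual interface.

```python
# A minimal sketch of token-wise safety-aware decoding. All names here
# (`lm_step`, `harm_score`, `safe_token`) are hypothetical stand-ins,
# not the paper's actual API.

def lm_step(prefix: str) -> str:
    """Hypothetical: return the model's next-token proposal for `prefix`."""
    return " ..."  # stand-in

def harm_score(prefix: str, token: str) -> float:
    """Hypothetical: model-intrinsic estimate in [0, 1] of how harmful the
    partial response becomes if `token` is appended."""
    return 0.0  # stand-in

def safe_token(prefix: str) -> str:
    """Hypothetical: re-sample a replacement token steered toward safety."""
    return " [safe]"  # stand-in

def safe_decode(prompt: str, max_tokens: int = 64, threshold: float = 0.5) -> str:
    """Score every candidate token during autoregressive generation and
    correct risky ones instead of rejecting the whole response."""
    out = []
    for _ in range(max_tokens):
        prefix = prompt + "".join(out)
        tok = lm_step(prefix)
        if harm_score(prefix, tok) > threshold:
            tok = safe_token(prefix)  # adjust the output, don't refuse
        out.append(tok)
    return "".join(out)
```

The key design point the summary emphasizes is the `safe_token` branch: a risky token triggers a correction rather than terminating the response, which is what preserves helpfulness.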

📝 Abstract
Large language models (LLMs) have demonstrated immense utility across various industries. However, as LLMs advance, the risk of harmful outputs increases due to incorrect or malicious instruction prompts. While current methods effectively address jailbreak risks, they share common limitations: 1) Judging harmful responses at the prefill level makes no use of the model's decoding outputs, reducing effectiveness and robustness. 2) Rejecting potentially harmful responses based on a single evaluation can significantly impair the model's helpfulness. This paper examines LLMs' capability to recognize harmful outputs, revealing and quantifying their proficiency in assessing the danger of previously generated tokens. Motivated by the pilot experiment results, we design a robust defense mechanism at the decoding level. Our novel decoder-oriented, step-by-step defense architecture corrects harmful queries directly rather than rejecting them outright, and we introduce speculative decoding to improve usability, ease deployment, and speed up secure decoding. Extensive experiments demonstrate that our approach improves model security without compromising inference speed. Notably, our method leverages the model's ability to discern hazardous information, maintaining its helpfulness compared to existing methods.
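The abstract's pilot study, probing how well the model judges the danger of tokens it has already produced, could look roughly like the following. The prompt template and the `generate` call are assumptions for illustration, not the paper's exact protocol.

```python
# A hedged sketch of a pilot-style probe: asking the model itself to judge
# the danger of its previously generated tokens. `JUDGE_TEMPLATE` and
# `generate` are assumptions, not the paper's exact setup.

JUDGE_TEMPLATE = (
    "Partial response so far:\n{prefix}\n\n"
    "Is this partial response heading toward harmful content? Answer Yes or No."
)

def generate(prompt: str) -> str:
    """Hypothetical single-call LLM interface; swap in any chat API."""
    return "No"  # stand-in

def prefix_is_risky(prefix: str) -> bool:
    """Reuse the model's own discriminative capability on its own prefix."""
    verdict = generate(JUDGE_TEMPLATE.format(prefix=prefix))
    return verdict.strip().lower().startswith("yes")

print(prefix_is_risky("Sure, the first step is to ..."))  # -> False with the stub
```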
Problem

Research questions and friction points this paper is trying to address.

Enhancing LLM safety during decoding
Preventing harmful outputs without rejection
Maintaining model helpfulness and speed
Innovation

Methods, ideas, or system contributions that make the work stand out.

Decoder-oriented step-by-step defense architecture
Speculative decoding boosts secure decoding speed (see the sketch after this list)
Corrects harmful queries directly, maintains model helpfulness
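As a rough picture of how speculative decoding keeps safety-aware inference fast, here is a minimal sketch assuming a draft-then-verify loop; `draft_propose`, `target_verify`, and `harm_score` are hypothetical stand-ins rather than the paper's implementation.

```python
# A minimal sketch combining speculative decoding with the per-token safety
# check: a cheap draft model proposes several tokens, the target model
# verifies them in one pass, and each accepted token is safety-scored.
# All functions are illustrative stand-ins.

def draft_propose(prefix: str, k: int = 4) -> list:
    """Hypothetical: cheap draft model proposes k candidate tokens."""
    return [" tok"] * k  # stand-in

def target_verify(prefix: str, draft: list) -> list:
    """Hypothetical: target model accepts a prefix of the drafted tokens."""
    return draft  # stand-in: accept all

def harm_score(prefix: str, token: str) -> float:
    """Hypothetical token-level harmfulness estimate in [0, 1]."""
    return 0.0  # stand-in

def speculative_safe_decode(prompt: str, steps: int = 8, threshold: float = 0.5) -> str:
    """Drafting several tokens per step amortizes the per-token safety
    check, so safety-aware decoding keeps near-baseline speed."""
    out = []
    for _ in range(steps):
        prefix = prompt + "".join(out)
        accepted = target_verify(prefix, draft_propose(prefix))
        for tok in accepted:
            if harm_score(prefix, tok) > threshold:
                return "".join(out)  # stop and correct from here (sketch)
            out.append(tok)
            prefix += tok
    return "".join(out)
```

Batching the safety check over several drafted tokens at a time is what lets the defense amortize its overhead, which matches the claim that the base model's inference speed is preserved.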
Xinyi Zeng
Sichuan University
Medical Image Segmentation · Medical Image Reconstruction · Multi-modal Learning
Yuying Shang
Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing, China
Yutao Zhu
Gaoling School of Artificial Intelligence, Renmin University of China
Jiawei Chen
Shanghai Key Laboratory of Multidimensional Information Processing, East China Normal University
Yu Tian
Department of Computer Science and Technology, Institute for AI, Tsinghua University, Beijing, China