Steering When Necessary: Flexible Steering Large Language Models with Backtracking

📅 2025-08-24
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Existing alignment interventions for large language models (LLMs) suffer from coarse-grained activation steering and overreliance on a single input signal (e.g., the prompt), leading to insufficient precision in behavioral control. Method: We propose Flexible Activation Steering with Backtracking (FASB), a dynamic inference-time method that monitors hidden-layer states, integrates both the generated content and the input question to assess the necessity and intensity of intervention, and, upon detecting misalignment, backtracks to revise previously generated tokens. Contribution/Results: FASB is the first approach to unify activation steering with history-aware backtracking and regeneration, eliminating indiscriminate intervention and the limitations of question-only decision-making. It achieves significant improvements over state-of-the-art baselines on TruthfulQA and six multiple-choice benchmarks, substantially enhancing both factual consistency and behavioral alignment.

📝 Abstract
Large language models (LLMs) have achieved remarkable performance across many generation tasks. Nevertheless, effectively aligning them with desired behaviors remains a significant challenge. Activation steering is an effective and cost-efficient approach that directly modifies the activations of LLMs during the inference stage, aligning their responses with desired behaviors while avoiding the high cost of fine-tuning. Existing methods typically either intervene indiscriminately in all generations or rely solely on the question to decide whether to intervene, which limits accurate assessment of the required intervention strength. To this end, we propose the Flexible Activation Steering with Backtracking (FASB) framework, which dynamically determines both the necessity and strength of intervention by tracking the internal states of the LLM during generation, considering both the question and the generated content. Since intervening only after detecting a deviation from the desired behavior is often too late, we further propose a backtracking mechanism that corrects the deviated tokens and steers the LLM toward the desired behavior. Extensive experiments on the TruthfulQA dataset and six multiple-choice datasets demonstrate that our method outperforms baselines. Our code will be released at https://github.com/gjw185/FASB.
Problem

Research questions and friction points this paper is trying to address.

Dynamically determining intervention necessity and strength for LLMs
Correcting deviated tokens with backtracking mechanism
Aligning LLM responses without costly fine-tuning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic intervention based on internal states
Backtracking mechanism to correct deviations
Flexible steering considering question and content
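The innovations above describe a generate-monitor-backtrack loop. As a rough illustration of that control flow only, here is a toy sketch: the `probe`, `steer`, and `toy_step` functions, the threshold, and the backtracking window are all hypothetical stand-ins, not the paper's actual implementation, which operates on real hidden-layer activations of an LLM.

```python
# Toy sketch of flexible activation steering with backtracking.
# All names and values here are illustrative assumptions.

def probe(hidden_state):
    """Hypothetical probe: alignment score in [0, 1] (1.0 = fully aligned)."""
    return hidden_state["align_score"]

def steer(hidden_state, strength):
    """Hypothetical intervention: nudge the state toward the desired behavior."""
    nudged = dict(hidden_state)
    nudged["align_score"] = min(1.0, nudged["align_score"] + strength)
    return nudged

def generate_with_backtracking(model_step, prompt, max_tokens=8,
                               threshold=0.5, window=2):
    """Generate token by token; when the probe flags a deviation, discard
    the last `window` tokens and regenerate them under steering, with
    intervention strength scaled by how far the probe fell below threshold."""
    tokens, states = [], []
    while len(tokens) < max_tokens:
        token, state = model_step(prompt, tokens, steered=False)
        tokens.append(token)
        states.append(state)
        if probe(state) < threshold:
            # Backtrack: drop the deviated tokens, then re-generate them
            # with an intervention proportional to the alignment deficit.
            k = min(window, len(tokens))
            strength = threshold - probe(state)
            del tokens[-k:]
            del states[-k:]
            for _ in range(k):
                token, state = model_step(prompt, tokens, steered=True)
                state = steer(state, strength)
                tokens.append(token)
                states.append(state)
    return tokens

def toy_step(prompt, tokens, steered):
    """Stand-in for one decoding step; deviates at position 3 unless steered."""
    pos = len(tokens)
    if steered:
        return f"good{pos}", {"align_score": 0.9}
    if pos == 3:
        return "drift", {"align_score": 0.2}
    return f"tok{pos}", {"align_score": 0.8}

out = generate_with_backtracking(toy_step, "question")
```

In this run the deviated token at position 3 triggers a backtrack of two tokens, so positions 2 and 3 are regenerated under steering and the final sequence contains no deviated token.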
👥 Authors

Jinwei Gan, State Key Laboratory for Novel Software Technology, Nanjing University, China
Zifeng Cheng, State Key Laboratory for Novel Software Technology, Nanjing University, China
Zhiwei Jiang, Nanjing University (Natural Language Processing)
Cong Wang, State Key Laboratory for Novel Software Technology, Nanjing University, China
Yafeng Yin, State Key Laboratory for Novel Software Technology, Nanjing University, China
Xiang Luo, Nanjing University (Natural Language Processing, Task-Oriented Dialogue)
Yuchen Fu, Nanjing University (Computer Vision, Multimodal Learning)
Qing Gu, Nanjing University