Selective Steering: Norm-Preserving Control Through Discriminative Layer Selection

📅 2026-01-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Despite alignment efforts, large language models remain vulnerable to adversarial attacks. Existing activation manipulation methods often suffer from sensitivity to coefficients, norm imbalance, or insufficient control granularity. This work proposes a norm-preserving rotation mechanism applied during inference, which manipulates the angular orientation of activation vectors in selected layers. Coupled with a discriminative layer selection strategy based on feature sign alignment, the approach enables precise behavioral intervention with minimal disruption. Evaluated across nine mainstream models, the method achieves an attack success rate 5.5 times higher than current state-of-the-art techniques, while maintaining zero perplexity violations and preserving nearly 100% of standard task performance.

📝 Abstract
Despite significant progress in alignment, large language models (LLMs) remain vulnerable to adversarial attacks that elicit harmful behaviors. Activation steering techniques offer a promising inference-time intervention approach, but existing methods suffer from critical limitations: activation addition requires careful coefficient tuning and is sensitive to layer-specific norm variations, while directional ablation provides only binary control. Recent work on Angular Steering introduces continuous control via rotation in a 2D subspace, but its practical implementation violates norm preservation, causing distribution shift and generation collapse, particularly in models below 7B parameters. We propose Selective Steering, which addresses these limitations through two key innovations: (1) a mathematically rigorous norm-preserving rotation formulation that maintains activation distribution integrity, and (2) discriminative layer selection that applies steering only where feature representations exhibit opposite-signed class alignment. Experiments across nine models demonstrate that Selective Steering achieves 5.5x higher attack success rates than prior methods while maintaining zero perplexity violations and approximately 100% capability retention on standard benchmarks. Our approach provides a principled, efficient framework for controllable and stable LLM behavior modification. Code: https://github.com/knoveleng/steering
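The abstract's first innovation, a norm-preserving rotation in a 2D subspace of activation space, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the choice of NumPy are assumptions, but the underlying construction (rotating only the component of the activation that lies in the steering plane, leaving the orthogonal complement untouched) preserves the vector's norm exactly.

```python
import numpy as np

def norm_preserving_rotation(h, u, v, theta):
    """Rotate activation h by angle theta within the plane span(u, v).

    The component of h orthogonal to the plane is left unchanged, and a
    2D rotation is an isometry, so ||output|| == ||h|| up to float error.
    (Illustrative sketch; not the paper's reference implementation.)
    """
    # Orthonormalize the plane basis via Gram-Schmidt.
    u = u / np.linalg.norm(u)
    v = v - (v @ u) * u
    v = v / np.linalg.norm(v)
    # Coordinates of h inside the plane.
    a, b = h @ u, h @ v
    # Component of h outside the plane (unchanged by the rotation).
    h_perp = h - a * u - b * v
    # Standard 2D rotation of the in-plane coordinates.
    a_rot = a * np.cos(theta) - b * np.sin(theta)
    b_rot = a * np.sin(theta) + b * np.cos(theta)
    return h_perp + a_rot * u + b_rot * v
```

Because only the angular position within the plane changes, the activation's distributional statistics that depend on its norm are untouched, which is the property the abstract says naive Angular Steering implementations violate.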
Problem

Research questions and friction points this paper is trying to address.

adversarial attacks
activation steering
norm preservation
large language models
distribution shift
Innovation

Methods, ideas, or system contributions that make the work stand out.

norm-preserving rotation
discriminative layer selection
activation steering
Angular Steering
LLM alignment
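The second innovation, discriminative layer selection based on "opposite-signed class alignment," can be illustrated with a small sketch. The function and variable names here are hypothetical: the idea, per the abstract, is to steer only in layers where the two behavior classes (e.g. harmful vs. harmless prompts) project onto the steering direction with opposite signs.

```python
import numpy as np

def select_layers(harmful_feats, harmless_feats, directions):
    """Hypothetical sketch of discriminative layer selection.

    harmful_feats / harmless_feats: per-layer lists of (n_samples, d)
    activation matrices; directions: per-layer steering direction vectors.
    A layer is kept only if the two classes' mean projections onto its
    steering direction have opposite signs.
    """
    selected = []
    for layer, d in enumerate(directions):
        d = d / np.linalg.norm(d)
        mu_harm = harmful_feats[layer].mean(axis=0) @ d
        mu_safe = harmless_feats[layer].mean(axis=0) @ d
        if mu_harm * mu_safe < 0:  # opposite-signed class alignment
            selected.append(layer)
    return selected
```

Restricting the rotation to such layers is what the abstract credits for steering "only where feature representations exhibit opposite-signed class alignment," minimizing disruption elsewhere.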