🤖 AI Summary
This work addresses the limitations of existing activation steering methods, which are highly susceptible to high-dimensional noise and inter-layer semantic drift and therefore struggle to align accurately with target intents. The authors propose GER-steer, a training-free, general-purpose framework that introduces, for the first time, a global evolution signal based on cross-layer consistency. By leveraging the geometric stability of neural network representation dynamics, GER-steer globally refines the raw steering vectors to isolate robust semantic directions. The approach integrates activation engineering, geometric stability analysis, and high-dimensional vector correction, eliminating the need for fine-tuning or layer-wise hyperparameter tuning. Experimental results demonstrate that GER-steer significantly outperforms current baselines in steering efficacy, generalization capability, and model alignment reliability.
📝 Abstract
Activation engineering enables precise control over Large Language Models (LLMs) without the computational cost of fine-tuning. However, existing methods that derive vectors from static activation differences are susceptible to high-dimensional noise and layer-wise semantic drift, often capturing spurious correlations rather than the target intent. To address this, we propose Global Evolutionary Refined Steering (GER-steer), a training-free framework grounded in the geometric stability of the network's representation evolution. GER-steer exploits this global signal to rectify raw steering vectors, effectively decoupling robust semantic intent from orthogonal artifacts. Extensive evaluations confirm that GER-steer consistently outperforms baselines, delivering superior efficacy and generalization without layer-specific tuning and establishing a universal solution for reliable model alignment.
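To make the core idea concrete, here is a minimal, hypothetical sketch of the two ingredients the abstract describes: raw steering vectors computed as static activation differences, and a cross-layer refinement step that keeps only the direction consistent across layers. The function names, the use of an SVD to extract the shared direction, and all parameters are illustrative assumptions, not the authors' actual GER-steer implementation.

```python
import numpy as np

def steering_vectors(pos_acts, neg_acts):
    """Raw per-layer steering vectors: mean activation difference
    between positive and negative prompt sets (a common baseline,
    not specific to GER-steer)."""
    return [p.mean(axis=0) - n.mean(axis=0) for p, n in zip(pos_acts, neg_acts)]

def refine_cross_layer(vectors):
    """Illustrative cross-layer refinement (assumed, simplified):
    stack the normalized layer vectors, take the dominant shared
    direction via SVD, and keep only each vector's component along
    it, discarding the orthogonal (noise) component."""
    V = np.stack([v / np.linalg.norm(v) for v in vectors])  # (layers, dim)
    _, _, Vt = np.linalg.svd(V, full_matrices=False)
    shared = Vt[0]  # principal direction consistent across layers
    return [np.dot(v, shared) * shared for v in vectors]
```

Under this toy model, the orthogonal residue at each layer is treated as high-dimensional noise, so the refined vectors all point along a single semantically stable axis; the real method presumably uses a richer global evolution signal than a rank-1 SVD.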