Improved Representation Steering for Language Models

📅 2025-05-27
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing representation-guided steering methods offer markedly weaker concept-level control over generation than prompt engineering, in particular when introducing or suppressing specific concepts, and they also rely on reference texts and lack robustness. To address this, the authors propose Reference-free Preference Steering (RePS), a bidirectional preference-optimization objective that unifies concept steering and suppression in a single reference-free formulation. RePS performs parameter-efficient fine-tuning directly in the representation space, eliminating dependence on reference prompts. Evaluated on the AxBench benchmark across Gemma models ranging from 2B to 27B parameters, RePS consistently outperforms prior representation-guided approaches, matches or surpasses the language-modeling objective in concept suppression, and remains robust to prompt-based jailbreaking attacks that defeat prompting. Overall, RePS substantially narrows the performance gap between representation-guided control and prompt engineering.

📝 Abstract
Steering methods for language models (LMs) seek to provide fine-grained and interpretable control over model generations by variously changing model inputs, weights, or representations to adjust behavior. Recent work has shown that adjusting weights or representations is often less effective than steering by prompting, for instance when wanting to introduce or suppress a particular concept. We demonstrate how to improve representation steering via our new Reference-free Preference Steering (RePS), a bidirectional preference-optimization objective that jointly does concept steering and suppression. We train three parameterizations of RePS and evaluate them on AxBench, a large-scale model steering benchmark. On Gemma models with sizes ranging from 2B to 27B, RePS outperforms all existing steering methods trained with a language modeling objective and substantially narrows the gap with prompting -- while promoting interpretability and minimizing parameter count. In suppression, RePS matches the language-modeling objective on Gemma-2 and outperforms it on the larger Gemma-3 variants while remaining resilient to prompt-based jailbreaking attacks that defeat prompting. Overall, our results suggest that RePS provides an interpretable and robust alternative to prompting for both steering and suppression.
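To make "adjusting representations" concrete: representation steering typically shifts a model's hidden activations along a learned concept direction at inference time. The sketch below is a minimal illustration of that idea, not the paper's method; the vector `v`, scale `alpha`, and dimensions are hypothetical.

```python
import numpy as np

def steer(hidden, v, alpha):
    """Shift hidden states along a concept direction.

    hidden: (seq_len, d_model) activations at one layer
    v:      (d_model,) concept/steering vector (normalized here)
    alpha:  signed scale -- positive to promote the concept,
            negative to suppress it (the bidirectional use case).
    """
    return hidden + alpha * (v / np.linalg.norm(v))

# Toy example with random activations and a random concept direction.
rng = np.random.default_rng(0)
h = rng.normal(size=(4, 8))   # 4 tokens, hidden size 8
v = rng.normal(size=8)
steered = steer(h, v, alpha=2.0)
```

Note that the same machinery serves both steering (`alpha > 0`) and suppression (`alpha < 0`), which is the control axis RePS optimizes over.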
Problem

Research questions and friction points this paper is trying to address.

Improving representation steering for language models
Enhancing concept steering and suppression simultaneously
Bridging performance gap between steering methods and prompting
Innovation

Methods, ideas, or system contributions that make the work stand out.

Bidirectional preference-optimization for concept steering
Reference-free Preference Steering (RePS) method
Interpretable, robust alternative to prompting
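The paper does not spell out its objective here, but the "bidirectional preference-optimization" idea can be sketched with a DPO-style logistic loss over two preference pairs: under positive steering, a concept-containing continuation should beat a concept-free one, and under negative steering the preference flips. Everything below (the function name, the `beta` temperature, the pairing scheme) is an illustrative assumption, not the RePS formulation.

```python
import numpy as np

def logsigmoid(x):
    # Numerically stable log(sigmoid(x)) = -log(1 + exp(-x)).
    return -np.logaddexp(0.0, -x)

def bidirectional_pref_loss(lp_steer_pos, lp_steer_neg,
                            lp_supp_pos, lp_supp_neg, beta=1.0):
    """Logistic preference loss on two directions of control.

    lp_steer_pos / lp_steer_neg: log-probs of the preferred (on-concept)
        vs. dispreferred continuation under positive steering.
    lp_supp_pos / lp_supp_neg: log-probs of the preferred (off-concept)
        vs. dispreferred continuation under suppression.
    Lower loss means both preferences are satisfied with larger margin.
    """
    steer_term = -logsigmoid(beta * (lp_steer_pos - lp_steer_neg))
    supp_term = -logsigmoid(beta * (lp_supp_pos - lp_supp_neg))
    return steer_term + supp_term

# Margins of +4 nats in both directions give a small, positive loss.
loss = bidirectional_pref_loss(-5.0, -9.0, -4.0, -8.0)
```

Jointly minimizing both terms is what lets a single learned intervention both introduce and suppress a concept, rather than training separate steering and suppression modules.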