🤖 AI Summary
This work addresses the low precision and poor interpretability of output control during large language model (LLM) inference. We propose a novel activation-space intervention method based on Conceptors, which is, to our knowledge, the first application of Conceptors to LLM control. Our approach models semantic concepts as ellipsoidal regions in activation space and employs soft projection matrices to achieve nonlinear, robust activation modulation. Crucially, it supports Boolean operations (e.g., AND, OR), enabling compositional, cooperative concept guidance and overcoming the linearity limitations of conventional steering-vector arithmetic. Experiments across multiple controllable-generation tasks demonstrate consistent superiority over baselines: Boolean-combined Conceptors yield an average 3.2% absolute accuracy improvement over simple vector addition. The implementation is publicly available.
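To make the "ellipsoidal region / soft projection" idea concrete, here is a minimal NumPy sketch. It follows the standard conceptor definition from the reservoir-computing literature, C = R(R + α⁻²I)⁻¹ with R the correlation matrix of cached activations and α the aperture; the paper's actual implementation (layer choice, aperture tuning, how activations are cached) may differ, and the variable names here are illustrative assumptions.

```python
import numpy as np

def compute_conceptor(X, aperture=10.0):
    """Conceptor for a set of activation samples.

    X        : (n_samples, d) activations collected for one concept.
    aperture : scaling that controls how tightly the ellipsoid
               hugs the sample distribution (larger = tighter fit).
    Returns a (d, d) soft projection matrix with eigenvalues in [0, 1).
    """
    R = X.T @ X / X.shape[0]                        # correlation matrix
    d = R.shape[0]
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(d))

# Toy usage: build a conceptor from synthetic "concept" activations,
# then softly project a new activation toward the concept region.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))       # stand-in for cached LLM activations
C = compute_conceptor(X)
h = rng.normal(size=8)              # stand-in for a hidden state at inference
h_steered = C @ h                   # soft projection, not a hard subspace cut
```

Unlike adding a single steering vector, the conceptor rescales each direction of the activation by how strongly that direction is represented in the concept data, which is what gives the "soft" in soft projection.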
📝 Abstract
Large language models have transformed AI, yet reliably controlling their outputs remains a challenge. This paper explores activation engineering, where the outputs of pre-trained LLMs are controlled by manipulating their activations at inference time. Unlike traditional methods that use a single steering vector, we introduce conceptors: mathematical constructs that represent sets of activation vectors as ellipsoidal regions. Conceptors act as soft projection matrices and offer more precise control over complex activation patterns. Our experiments demonstrate that conceptors outperform traditional methods across multiple steering tasks. We further use Boolean operations on conceptors for combined steering goals, which empirically outperform additively combined steering vectors on a set of tasks. These results highlight conceptors as a promising tool for more effective steering of LLMs. Our code is available at github.com/jorispos/conceptorsteering.
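The Boolean combinations mentioned in both the summary and the abstract can be sketched with the standard conceptor algebra (NOT, AND, OR) from the conceptor literature: ¬C = I − C, C ∧ B = (C⁻¹ + B⁻¹ − I)⁻¹, and C ∨ B = ¬(¬C ∧ ¬B). The paper's exact combination strategy may differ; the helper names below are illustrative.

```python
import numpy as np

def conceptor(X, aperture=10.0):
    """C = R (R + aperture^-2 I)^-1 for activation samples X."""
    R = X.T @ X / X.shape[0]
    return R @ np.linalg.inv(R + aperture ** -2 * np.eye(R.shape[0]))

def NOT(C):
    return np.eye(C.shape[0]) - C

def AND(C, B):
    # de-Morgan-compatible intersection of two ellipsoidal regions
    I = np.eye(C.shape[0])
    return np.linalg.inv(np.linalg.inv(C) + np.linalg.inv(B) - I)

def OR(C, B):
    # union via de Morgan: C OR B = NOT(NOT(C) AND NOT(B))
    return NOT(AND(NOT(C), NOT(B)))

# Combine two toy "concept" conceptors into one compositional steering matrix.
rng = np.random.default_rng(1)
C1 = conceptor(rng.normal(size=(300, 6)))
C2 = conceptor(rng.normal(size=(300, 6)))
C_or = OR(C1, C2)   # steers toward the union of both concept regions
```

This is the compositionality that plain vector addition lacks: the OR of two conceptors remains a well-formed soft projection (eigenvalues stay in [0, 1]), whereas summed steering vectors can overshoot or interfere.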