Steering Code LLMs with Activation Directions for Language and Library Control

📅 2026-03-24
📈 Citations: 0
Influential: 0
📝 Abstract
Code LLMs often default to particular programming languages and libraries under neutral prompts. We investigate whether these preferences are encoded as approximately linear directions in activation space that can be manipulated at inference time. Using a difference-in-means method, we estimate layer-wise steering vectors for five language/library pairs and add them to model hidden states during generation. Across three open-weight code LLMs, these interventions substantially increase generation toward the target ecosystem under neutral prompts and often remain effective even when prompts explicitly request the opposite choice. Steering strength varies by model and target, with common ecosystems easier to induce than rarer alternatives, and overly strong interventions can reduce output quality. Overall, our results suggest that code-style preferences in LLMs are partly represented by compact, steerable structure in activation space.
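The abstract's difference-in-means method can be sketched in a few lines: collect layer activations for prompts that elicit the target ecosystem and for contrast prompts, subtract the two means to get a steering vector, and add a scaled copy of it to hidden states during generation. The toy example below uses simulated numpy activations rather than a real model's hidden states; the dimension, sample counts, `steer` helper, and scale `alpha` are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Toy stand-in for layer activations from a code LLM; in practice these
# would be hidden states captured at a fixed layer via forward hooks.
rng = np.random.default_rng(0)
d = 16  # hypothetical hidden size

# Simulated activations for prompts eliciting the target ecosystem
# (e.g. the preferred language/library) vs. a contrast ecosystem.
target_acts = rng.normal(loc=1.0, size=(100, d))
contrast_acts = rng.normal(loc=-1.0, size=(100, d))

# Difference-in-means steering vector: mean(target) - mean(contrast).
steering_vec = target_acts.mean(axis=0) - contrast_acts.mean(axis=0)

def steer(hidden_state: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the scaled steering vector to a hidden state at inference time."""
    return hidden_state + alpha * steering_vec

# Apply the intervention to one simulated hidden state.
h = rng.normal(size=d)
h_steered = steer(h, alpha=0.5)
```

The scale `alpha` corresponds to the paper's observation that intervention strength matters: larger values push generations harder toward the target ecosystem but, past a point, can degrade output quality.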
Problem

Research questions and friction points this paper is trying to address.

Code LLMs · language preference · library bias · neutral prompts · activation space
Innovation

Methods, ideas, or system contributions that make the work stand out.

activation steering · code LLMs · language control · library preference · linear directions