🤖 AI Summary
This study investigates two mechanisms of value expression in large language models (LLMs): *intrinsic expression*, reflecting values internalized during training, and *prompted expression*, elicited by explicit prompts. To probe whether these mechanisms share underlying machinery, the authors analyze them mechanistically using value vectors (feature directions extracted from the residual stream) and value neurons (MLP neurons that contribute to value expression). They find that the two mechanisms share components that are crucial for inducing value expression, yet each also possesses unique elements: components specific to the intrinsic mechanism promote lexical diversity in responses, whereas those specific to the prompted mechanism strengthen instruction following, with effects extending even to distant tasks such as jailbreaking. Consequently, prompted expression yields greater value steerability while intrinsic expression yields greater response diversity, offering mechanistic insight relevant to value alignment and persona steering.
📝 Abstract
Large language models (LLMs) can express different values in two distinct ways: (1) intrinsic expression, reflecting the model's inherent values learned during training, and (2) prompted expression, elicited by explicit prompts. Given their widespread use in value alignment and persona steering, it is paramount to clearly understand their underlying mechanisms, particularly whether they mostly overlap (as one might expect) or rely on substantially different mechanisms; yet this question remains largely understudied. We analyze it at the mechanistic level using two approaches: (1) value vectors, feature directions representing value mechanisms extracted from the residual stream, and (2) value neurons, MLP neurons that contribute to value expression. We demonstrate that intrinsic and prompted value mechanisms partly share common components that are crucial for inducing value expression, but also possess unique elements that manifest in different ways. As a result, these mechanisms lead to different degrees of value steerability (prompted > intrinsic) and response diversity (intrinsic > prompted). In particular, components unique to the intrinsic mechanism seem to promote lexical diversity in responses, whereas those specific to the prompted mechanism primarily strengthen instruction following, taking effect even in distant tasks like jailbreaking.
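The abstract does not spell out how value vectors are extracted, but a common recipe for obtaining a feature direction from the residual stream is a difference of means over contrastive activations (e.g., activations from value-expressing vs. neutral prompts), which can then be added back to steer generation. A minimal sketch of that general technique; the function names, the difference-of-means choice, and the steering coefficient are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def extract_value_vector(acts_pos: np.ndarray, acts_neg: np.ndarray) -> np.ndarray:
    """Difference-of-means direction between two sets of residual-stream
    activations, shape (n_samples, d_model). Returned unit-normalized.
    (Assumed extraction recipe, not necessarily the paper's.)"""
    direction = acts_pos.mean(axis=0) - acts_neg.mean(axis=0)
    return direction / np.linalg.norm(direction)

def steer(residual: np.ndarray, direction: np.ndarray, alpha: float = 2.0) -> np.ndarray:
    """Shift a residual-stream activation along the value direction;
    alpha controls steering strength (hypothetical coefficient)."""
    return residual + alpha * direction

# Toy demonstration with synthetic activations standing in for a model's.
rng = np.random.default_rng(0)
acts_pos = rng.normal(loc=1.0, size=(50, 8))   # "value expressed" activations
acts_neg = rng.normal(loc=0.0, size=(50, 8))   # "value absent" activations
v = extract_value_vector(acts_pos, acts_neg)

h = rng.normal(size=8)                          # one residual-stream vector
h_steered = steer(h, v)
# Steering strictly increases the projection onto the value direction:
assert h_steered @ v > h @ v
```

In practice the activations would come from hooked forward passes of the model at a chosen layer, and the same projection could be used to measure how strongly a value direction is active under intrinsic versus prompted conditions.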