🤖 AI Summary
This work investigates how AI systems encode semantic structure in the geometric properties of their representation spaces, focusing on representations that define softmax distributions. Drawing on information geometry, the study characterizes the intrinsic geometric structure of the softmax-induced representation space and introduces "dual steering", a method that uses linear probes to steer representations toward a target concept while minimizing interference with non-target concepts. Theoretical analysis shows that the method achieves an optimal trade-off between modifying the target concept and preserving off-target ones, and experiments indicate that it improves the controllability and stability of concept manipulation in neural representations.
📝 Abstract
This paper concerns the question of how AI systems encode semantic structure into the geometric structure of their representation spaces. The motivating observation is that the natural geometry of these representation spaces should reflect the way models use representations to produce behavior. We focus on the important special case of representations that define softmax distributions. In this case, we argue that the natural geometry is information geometry. Our focus is on the role of information geometry in semantic encoding and the linear representation hypothesis. As an illustrative application, we develop "dual steering", a method for robustly steering representations to exhibit a particular concept using linear probes. We prove that dual steering optimally modifies the target concept while minimizing changes to off-target concepts. Empirically, we find that dual steering enhances the controllability and stability of concept manipulation.
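To make the setup concrete, here is a minimal sketch of the baseline the abstract alludes to: steering a representation along a linear probe direction and observing the effect on the induced softmax distribution. This is naive additive steering, not the paper's dual steering; the unembedding matrix `W`, representation `h`, and probe direction are all hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

d, vocab = 16, 8                      # hypothetical dims: representation, vocabulary
W = rng.normal(size=(vocab, d))       # hypothetical unembedding matrix (logits = W @ h)
h = rng.normal(size=d)                # hypothetical representation vector
probe = rng.normal(size=d)
probe /= np.linalg.norm(probe)        # unit-norm linear probe direction for a concept

def softmax(z):
    z = z - z.max()                   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Naive additive steering: move the representation along the probe direction.
# Dual steering, per the abstract, instead works in the geometry induced by
# the softmax distribution to limit off-target changes.
alpha = 3.0
h_steered = h + alpha * probe

p_before = softmax(W @ h)
p_after = softmax(W @ h_steered)

# The probe's score on the representation increases by exactly alpha,
# since the probe is unit-norm.
print(probe @ h_steered - probe @ h)  # ≈ alpha
```

The point of contrast is that this additive edit changes the logits of every token whose unembedding row overlaps with the probe direction, which is exactly the off-target interference that dual steering is designed to minimize.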