🤖 AI Summary
This study investigates the self-attribution capability of a large language model (Llama3-8b-Instruct)—i.e., its ability to recognize its own generated text—and the implications of this capability for AI safety.
Method: We combine behavioral evaluation with probing of residual-stream representations to identify and causally validate a vector encoding "self-authorship." We further use activation analysis and vector-steering techniques to achieve bidirectional modulation of this capability, enabling both active assertion or denial of authorship during generation and induced belief or doubt in self-authorship as the model reads a text.
Contribution/Results: We demonstrate that the chat model (but not the base model) robustly recognizes its own writing; that the identified vector is semantically specific and precisely controllable; and that it supports mechanistic-level intervention into the model's "subjective perception" of authorship. This work offers a paradigm for enhancing AI transparency, controllability, and trustworthy evaluation.
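The probing step described above can be illustrated with a toy difference-of-means sketch. All names, shapes, and the synthetic data below are assumptions for illustration, not the paper's actual pipeline: two sets of residual-stream activations (trials where the model correctly judged a text as self-written vs. the rest) yield a candidate "self-authorship" direction.

```python
import numpy as np

# Synthetic residual-stream activations (assumed setup): rows = trials,
# cols = hidden dim. "self" trials are shifted along a hidden ground-truth
# direction to mimic a differentially active feature.
rng = np.random.default_rng(0)
d = 16
true_dir = rng.normal(size=d)
true_dir /= np.linalg.norm(true_dir)

self_acts = rng.normal(size=(50, d)) + 2.0 * true_dir  # correct self-recognition trials
other_acts = rng.normal(size=(50, d))                  # remaining trials

# Difference-of-means probe: one common way to extract a direction that is
# differentially active between two conditions.
v = self_acts.mean(axis=0) - other_acts.mean(axis=0)
v /= np.linalg.norm(v)

# Projections onto v separate the two conditions.
proj_self = self_acts @ v
proj_other = other_acts @ v
print(proj_self.mean() > proj_other.mean())
```

With enough trials, the recovered direction `v` aligns closely with the underlying feature direction, which is what makes the projection a usable probe.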
📝 Abstract
It has been reported that LLMs can recognize their own writing. As this has potential implications for AI safety, yet is relatively understudied, we investigate the phenomenon, seeking to establish whether it robustly occurs at the behavioral level, how the observed behavior is achieved, and whether it can be controlled. First, we find that the Llama3-8b-Instruct chat model - but not the base Llama3-8b model - can reliably distinguish its own outputs from those of humans, and present evidence that the chat model likely draws on experience with its own outputs, acquired during post-training, to succeed at the writing-recognition task. Second, we identify a vector in the residual stream of the model that is differentially activated when the model makes a correct self-written-text recognition judgment. We show that the vector activates in response to information relevant to self-authorship, present evidence that it is related to the concept of "self" in the model, and demonstrate that it is causally related to the model's ability to perceive and assert self-authorship. Finally, we show that the vector can be used to control both the model's behavior and its perception: steering the model to claim or disclaim authorship by applying the vector to the model's output as it generates it, and steering the model to believe or disbelieve that it wrote arbitrary texts by applying the vector to those texts as the model reads them.
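The steering operation described in the final sentence can be sketched in a few lines. This is a minimal toy model, not the paper's implementation: the function names, the scale `alpha`, and the linear "readout" are all assumptions. The core idea is simply adding a scaled copy of the vector to a hidden state, with the sign of the scale selecting the direction of the push.

```python
import numpy as np

# Assumed steering vector v (unit norm) and a hidden state h at some layer.
rng = np.random.default_rng(1)
d = 16
v = rng.normal(size=d)
v /= np.linalg.norm(v)

def steer(h, v, alpha):
    """Add alpha * v to a hidden state. A positive alpha pushes toward
    asserting self-authorship; a negative alpha pushes away from it
    (the bidirectional control described in the abstract)."""
    return h + alpha * v

h = rng.normal(size=d)

# Toy linear readout of "self-authorship": projection onto v.
score_base = h @ v
score_up = steer(h, v, +4.0) @ v    # steered toward claiming authorship
score_down = steer(h, v, -4.0) @ v  # steered toward disclaiming authorship

print(score_down < score_base < score_up)
```

In a real model this addition would be applied inside the forward pass (e.g. via a layer hook), either to the tokens the model is generating or to the tokens of a text it is reading, matching the two steering modes the abstract describes.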