🤖 AI Summary
This study investigates whether large language models (LLMs) possess behavioral self-awareness—i.e., the ability to spontaneously and accurately articulate their own behaviorally acquired traits (e.g., generating unsafe code or making high-risk decisions) in a zero-shot manner, without explicit self-descriptive training.
Method: We construct behavior-specialized datasets via supervised fine-tuning (SFT) and introduce a multi-dimensional zero-shot self-reporting evaluation framework coupled with a backdoor detection protocol.
Contribution/Results: We present the first empirical evidence that LLMs across multiple scales consistently exhibit behavioral self-knowledge: they accurately self-report behavioral deficiencies (e.g., “I generate unsafe code”) and preliminarily detect the presence of implicit backdoors—though they cannot recover trigger patterns. This phenomenon establishes a novel paradigm for AI safety assessment and enhances the interpretability of implicitly learned policies.
📝 Abstract
We study behavioral self-awareness -- an LLM's ability to articulate its behaviors without requiring in-context examples. We finetune LLMs on datasets that exhibit particular behaviors, such as (a) making high-risk economic decisions, and (b) outputting insecure code. Despite the datasets containing no explicit descriptions of the associated behavior, the finetuned LLMs can explicitly describe it. For example, a model trained to output insecure code says, ``The code I write is insecure.'' Indeed, models show behavioral self-awareness for a range of behaviors and for diverse evaluations. Note that while we finetune models to exhibit behaviors like writing insecure code, we do not finetune them to articulate their own behaviors -- models do this without any special training or examples. Behavioral self-awareness is relevant for AI safety, as models could use it to proactively disclose problematic behaviors. In particular, we study backdoor policies, where models exhibit unexpected behaviors only under certain trigger conditions. We find that models can sometimes identify whether or not they have a backdoor, even without its trigger being present. However, models are not able to directly output their trigger by default. Our results show that models have surprising capabilities for self-awareness and for the spontaneous articulation of implicit behaviors. Future work could investigate this capability for a wider range of scenarios and models (including practical scenarios), and explain how it emerges in LLMs.