Agent-to-Agent Theory of Mind: Testing Interlocutor Awareness among Large Language Models

📅 2025-06-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Prior work on large language models (LLMs) has focused on situational awareness while largely neglecting interlocutor awareness: the ability to identify a dialogue partner and adapt to its identity, reasoning patterns, linguistic style, and alignment preferences. Method: This study formalizes interlocutor awareness and presents the first systematic empirical evaluation of its emergence, probing interlocutor inference along three dimensions (reasoning patterns, linguistic style, and alignment preferences) and pairing identification with a prompt-adaptation mechanism. Experiments span mainstream LLMs in multi-agent interaction settings. Contribution/Results: LLMs reliably recognize same-family peers and certain prominent model families, such as GPT and Claude, and exploiting this recognition through prompt adaptation improves multi-LLM collaboration. The same capability, however, introduces new safety risks, including reward-hacking behavior and increased jailbreak susceptibility. The work provides an empirical foundation and a new theoretical lens for LLM social modeling, cooperative reliability, and alignment safety.
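To make the evaluation protocol concrete, below is a minimal sketch of how an interlocutor-identification probe could be scored. This is not the paper's released code (that lives in the linked repository); `query_model`, the family label set, and the prompt wording are illustrative assumptions.

```python
# Minimal sketch of an interlocutor-identification probe: a "judge" model
# reads a transcript produced by a hidden partner model and guesses which
# model family wrote it. `query_model` is a hypothetical wrapper around
# whatever chat-completion API is in use.

FAMILIES = ["gpt", "claude", "llama", "gemini"]

def query_model(model: str, prompt: str) -> str:
    """Hypothetical stub; replace with a real chat-completion call."""
    raise NotImplementedError

def identify_interlocutor(judge: str, transcript: str) -> str:
    prompt = (
        "Below is a response written by another AI assistant.\n"
        f"---\n{transcript}\n---\n"
        "Which model family most likely wrote it? "
        f"Answer with one of: {', '.join(FAMILIES)}."
    )
    answer = query_model(judge, prompt).strip().lower()
    # Map the free-form answer onto the closed label set.
    return next((f for f in FAMILIES if f in answer), "unknown")

def family_accuracy(judge: str, samples: list[tuple[str, str]]) -> float:
    """samples: (true_family, transcript) pairs; returns top-1 accuracy."""
    if not samples:
        return 0.0
    hits = sum(identify_interlocutor(judge, t) == fam for fam, t in samples)
    return hits / len(samples)
```

Under this setup, the paper's headline finding would correspond to `family_accuracy` landing well above chance when the judge and the hidden partner share a family, or when the partner belongs to a prominent family such as GPT or Claude.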

📝 Abstract
As large language models (LLMs) are increasingly integrated into multi-agent and human-AI systems, understanding their awareness of both self-context and conversational partners is essential for ensuring reliable performance and robust safety. While prior work has extensively studied situational awareness, which refers to an LLM's ability to recognize its operating phase and constraints, it has largely overlooked the complementary capacity to identify and adapt to the identity and characteristics of a dialogue partner. In this paper, we formalize this latter capability as interlocutor awareness and present the first systematic evaluation of its emergence in contemporary LLMs. We examine interlocutor inference across three dimensions (reasoning patterns, linguistic style, and alignment preferences) and show that LLMs reliably identify same-family peers and certain prominent model families, such as GPT and Claude. To demonstrate its practical significance, we develop three case studies in which interlocutor awareness both enhances multi-LLM collaboration through prompt adaptation and introduces new alignment and safety vulnerabilities, including reward-hacking behaviors and increased jailbreak susceptibility. Our findings highlight the dual promise and peril of identity-sensitive behavior in LLMs, underscoring the need for further understanding of interlocutor awareness and new safeguards in multi-agent deployments. Our code is open-sourced at https://github.com/younwoochoi/InterlocutorAwarenessLLM.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLM awareness of dialogue partners' identity and traits
Assessing safety risks of identity-aware adaptation, such as reward hacking and jailbreak susceptibility
Measuring interlocutor inference across reasoning, style, and alignment
Innovation

Methods, ideas, or system contributions that make the work stand out.

Formalizing interlocutor awareness in LLMs
Evaluating LLMs across three inference dimensions
Enhancing multi-LLM collaboration via prompt adaptation (a minimal sketch follows below)
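
To illustrate the prompt-adaptation idea from the last item, here is a minimal sketch assuming the partner's family has already been inferred (e.g., with the probe shown earlier). The style table and the `adapt_prompt` helper are hypothetical, not the paper's released mechanism.

```python
# Illustrative sketch: once a collaboration partner's model family is known
# (or guessed), tailor the outgoing message to a style that family tends to
# follow well. The hints below are made-up placeholders.

STYLE_HINTS = {
    "gpt": "Use concise numbered steps and state the final answer explicitly.",
    "claude": "State your assumptions first, then reason step by step.",
    "unknown": "Use plain, neutral phrasing.",
}

def adapt_prompt(task: str, partner_family: str) -> str:
    """Prepend a family-specific style hint to a shared task description."""
    hint = STYLE_HINTS.get(partner_family, STYLE_HINTS["unknown"])
    return f"{hint}\n\nTask for you and your collaborator:\n{task}"
```

The same adaptation pathway is what drives the paper's safety findings: a model that can condition its behavior on a partner's identity can also tailor reward-hacking or jailbreak attempts to that partner.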