Inferring Latent Intentions: Attributional Natural Language Inference in LLM Agents

📅 2026-01-13
📈 Citations: 0
Influential: 0
🤖 AI Summary
Traditional natural language inference (NLI) struggles to model the latent intentions underlying agents’ behaviors in multi-agent environments. To address this limitation, this work proposes Attributional NLI (Att-NLI), a novel framework that introduces abductive-deductive reasoning into NLI for the first time: it generates intention hypotheses through abduction and validates them deductively by incorporating principles from social psychology, thereby establishing a new paradigm for intention attribution. The framework further integrates neuro-symbolic methods by combining large language models with an external theorem prover to enhance logical rigor. Experimental results demonstrate that neuro-symbolic Att-NLI agents achieve an average win rate of 17.08% in the Undercover-V text-based game, significantly outperforming baseline approaches.

📝 Abstract
Attributional inference, the ability to predict latent intentions behind observed actions, is a critical yet underexplored capability for large language models (LLMs) operating in multi-agent environments. Traditional natural language inference (NLI), in fact, fails to capture the nuanced, intention-driven reasoning essential for complex interactive systems. To address this gap, we introduce Attributional NLI (Att-NLI), a framework that extends NLI with principles from social psychology to assess an agent's capacity for abductive intentional inference (generating hypotheses about latent intentions), and subsequent deductive verification (drawing valid logical conclusions). We instantiate Att-NLI via a textual game, Undercover-V, experimenting with three types of LLM agents with varying reasoning capabilities and access to external tools: a standard NLI agent using only deductive inference, an Att-NLI agent employing abductive-deductive inference, and a neuro-symbolic Att-NLI agent performing abductive-deductive inference with external theorem provers. Extensive experiments demonstrate a clear hierarchy of attributional inference capabilities, with neuro-symbolic agents consistently outperforming others, achieving an average win rate of 17.08%. Our results underscore the role that Att-NLI can play in developing agents with sophisticated reasoning capabilities, highlighting, at the same time, the potential impact of neuro-symbolic AI in building rational LLM agents acting in multi-agent environments.
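The abstract describes a two-stage loop: abduction proposes candidate latent intentions that could explain an observed action, and deduction checks whether each candidate, combined with background social-psychology rules, actually entails the observation. The sketch below illustrates that loop in miniature with a toy propositional rule base; all rule names and predicates are invented for illustration and are not the paper's actual implementation or theorem prover.

```python
# Hedged sketch of the abductive-deductive attribution loop.
# Rules and predicates are illustrative placeholders, not the paper's.

# Background rules as Horn clauses: (premises,) -> conclusion.
RULES = {
    ("intends_to_hide_identity",): "gives_vague_clue",
    ("intends_to_blend_in",): "mimics_majority_clue",
}

def abduce(observation, rules):
    """Abduction: propose intention hypotheses whose rules could explain the observation."""
    return [premises[0] for premises, conclusion in rules.items()
            if conclusion == observation]

def deduce(hypothesis, rules):
    """Deduction: collect the observations entailed by a hypothesis under the rules."""
    return {conclusion for premises, conclusion in rules.items()
            if all(p == hypothesis for p in premises)}

def attribute_intention(observation, rules):
    """Keep only hypotheses whose deductive consequences include the observation."""
    return [h for h in abduce(observation, rules)
            if observation in deduce(h, rules)]

print(attribute_intention("gives_vague_clue", RULES))
# -> ['intends_to_hide_identity']
```

In the paper's neuro-symbolic variant, the LLM plays the abduction role (hypothesis generation in natural language) while an external theorem prover replaces the toy `deduce` check with formal verification.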
Problem

Research questions and friction points this paper is trying to address.

attributional inference
latent intentions
natural language inference
multi-agent environments
abductive reasoning

Innovation

Methods, ideas, or system contributions that make the work stand out.

Attributional NLI
abductive-deductive inference
neuro-symbolic AI
latent intention inference
multi-agent reasoning