What Do Agents Think Others Would Do? Level-2 Inverse Games for Inferring Agents' Estimates of Others' Objectives

📅 2025-08-05
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
In decentralized multi-agent strategic interactions, agents often act on heterogeneous, inconsistent beliefs about one another's objectives; conventional level-1 inverse game-theoretic methods assume common knowledge of all objectives and therefore misattribute observed behavior. Method: We propose the first level-2 inverse game framework that explicitly models nested beliefs about agents' objectives, combining linear-quadratic game theory with gradient-based optimization to solve the resulting non-convex inverse problem. Contribution: Our approach relaxes the complete-information assumption and explicitly characterizes belief misalignment between agents. Evaluated on synthetic urban driving scenarios, it identifies fine-grained belief discrepancies overlooked by level-1 methods, reduces prediction error relative to level-1 inference, and establishes a paradigm for robust strategic inference in real-world interactive settings.

📝 Abstract
Effectively interpreting strategic interactions among multiple agents requires us to infer each agent's objective from limited information. Existing inverse game-theoretic approaches frame this challenge in terms of a "level-1" inference problem, in which we take the perspective of a third-party observer and assume that individual agents share complete knowledge of one another's objectives. However, this assumption breaks down in decentralized, real-world decision scenarios like urban driving and bargaining, in which agents may act based on conflicting views of one another's objectives. We demonstrate the necessity of inferring agents' heterogeneous estimates of each other's objectives through empirical examples, and by theoretically characterizing the prediction error of level-1 inference on fictitious gameplay data from linear-quadratic games. To address this fundamental issue, we propose a framework for level-2 inference to address the question: "What does each agent believe about all agents' objectives?" We prove that the level-2 inference problem is non-convex even in benign settings like linear-quadratic games, and we develop an efficient gradient-based approach for identifying local solutions. Experiments on a synthetic urban driving example show that our approach uncovers nuanced misalignments that level-1 methods miss.
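The abstract's pipeline, each agent plays an equilibrium of the game it believes it is in, and the observer recovers those beliefs by gradient descent on prediction error, can be sketched on a toy one-shot scalar LQ game. The quadratic cost c_i(u_i, u_j) = θ_i·u_i² + (u_i + u_j − 1)², the parameter values, and all function names below are illustrative assumptions, not the paper's actual model:

```python
# Toy sketch of level-2 inverse inference on a one-shot scalar LQ game.
# Each agent i has a true cost weight theta_i and a (possibly wrong)
# belief about the other's weight; it plays its component of the Nash
# equilibrium of the game it *believes* it is playing. The observer then
# recovers that belief by gradient descent on squared prediction error.
# The cost c_i(u_i, u_j) = theta_i*u_i**2 + (u_i + u_j - 1)**2 is an
# assumption for illustration, not the paper's model.

def believed_equilibrium(theta_self, theta_belief):
    """Nash equilibrium of the believed 2-player LQ game.

    First-order conditions give the linear system
    (theta_i + 1) u_i + u_j = 1 for both players.
    Returns (u_self, u_other)."""
    a, b = theta_self + 1.0, theta_belief + 1.0
    det = a * b - 1.0
    return (b - 1.0) / det, (a - 1.0) / det

def played_action(theta_self, theta_belief):
    """An agent plays its own component of its believed equilibrium."""
    return believed_equilibrium(theta_self, theta_belief)[0]

def infer_belief(theta_self, observed_u, lr=5.0, iters=5000, eps=1e-6):
    """Level-2 inference: recover the opponent-objective belief that
    best explains the observed action, via (finite-difference) gradient
    descent on the non-convex squared prediction error."""
    def loss(b):
        return (played_action(theta_self, b) - observed_u) ** 2
    belief = 1.0  # initial guess
    for _ in range(iters):
        grad = (loss(belief + eps) - loss(belief - eps)) / (2 * eps)
        belief -= lr * grad
    return belief

# Ground truth: agent 1 believes theta_2 is 0.5 (it is really 2.0),
# agent 2 believes theta_1 is 1.5 (it is really 1.0).
theta = (1.0, 2.0)
u1_obs = played_action(theta[0], 0.5)  # agent 1 acts under its belief
u2_obs = played_action(theta[1], 1.5)  # agent 2 acts under its belief

b12 = infer_belief(theta[0], u1_obs)   # recovers the *belief* ~0.5
b21 = infer_belief(theta[1], u2_obs)   # recovers the *belief* ~1.5
```

The point of the example mirrors the paper's: the recovered quantities are the agents' beliefs about each other (0.5 and 1.5), not the true objectives (2.0 and 1.0), so a level-1 method that assumed common knowledge would fit the wrong parameters to the observed actions.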
Problem

Research questions and friction points this paper is trying to address.

Infer agents' objectives from limited strategic-interaction data
Account for agents' conflicting views of one another's objectives in decentralized settings
Develop level-2 inference over agents' beliefs about all agents' objectives
Innovation

Methods, ideas, or system contributions that make the work stand out.

A level-2 inference framework for estimating agents' beliefs about objectives
A gradient-based method for finding local solutions to the non-convex inverse problem
Uncovers nuanced belief misalignments in strategic interactions