🤖 AI Summary
Existing inverse game-theoretic approaches overlook the learner’s agency, particularly in bounded-rational Stackelberg games where a leader must infer the follower’s cost function. Method: This paper introduces active learning into inverse game theory for the first time, proposing an action-selection strategy that maximizes Fisher information-based expected information gain to balance exploration and exploitation. Under quadratic cost assumptions, we integrate maximum-likelihood estimation with explicit game-theoretic modeling. Contribution/Results: We establish theoretical guarantees—namely, asymptotic consistency of parameter estimates and convergence of the inferred Stackelberg equilibrium. Simulation results demonstrate that the proposed method significantly improves both learning efficiency and estimation accuracy, achieving faster convergence to the true Stackelberg equilibrium compared to passive baselines.
📝 Abstract
Inverse game theory is used to infer the cost functions of all players from observed game outcomes. However, existing inverse game theory methods do not treat the learner as an active participant in the game, even though active participation could significantly enhance the learning process. In this paper, we extend inverse game theory to active inverse methods. In Stackelberg games with bounded rationality, the leader, acting as a learner, actively chooses actions to better understand the follower's cost function. First, we develop an active learning method that leverages Fisher information to maximize the information gain about the unknown parameters, and we prove consistency and asymptotic normality of the resulting estimates. Second, when the leader also accounts for its own cost, we develop an active inverse game method that balances exploration and exploitation, and we prove consistency and asymptotic convergence to the Stackelberg equilibrium under quadratic cost functions. Finally, we verify the properties of both methods through simulations in the quadratic case and demonstrate that the active inverse game method reaches the Stackelberg equilibrium more quickly through active exploration.
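The core idea of Fisher-information-driven action selection combined with maximum-likelihood estimation can be illustrated with a deliberately simplified sketch. Everything below is an assumption for illustration only, not the paper's model: a scalar follower whose noisy best response is linear in the unknown parameter, `y = θ·a + ε` with Gaussian noise, a finite candidate action set, and the closed-form least-squares MLE for `θ`. Under this linear-Gaussian assumption the Fisher information of an action `a` is simply `a²/σ²`, so the most informative action is the one with the largest magnitude.

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0   # unknown follower parameter the leader wants to learn
sigma = 0.1        # observation noise std
actions = np.linspace(0.1, 1.0, 10)  # candidate leader actions (assumed set)

def fisher_info(a, sigma=sigma):
    # For y ~ N(theta * a, sigma^2), the Fisher information about theta
    # carried by action a is a^2 / sigma^2.
    return a**2 / sigma**2

chosen, obs = [], []
for _ in range(20):
    # Active selection: pick the action maximizing expected information gain.
    a = max(actions, key=fisher_info)
    # Noisy observation of the follower's best response under the toy model.
    y = theta_true * a + sigma * rng.standard_normal()
    chosen.append(a)
    obs.append(y)

chosen = np.array(chosen)
obs = np.array(obs)
# Closed-form MLE for theta in the linear-Gaussian model (weighted least squares).
theta_hat = (chosen @ obs) / (chosen @ chosen)
```

A passive baseline would instead sample actions uniformly from the candidate set; because the estimator variance scales as `σ²/Σaᵢ²`, always playing the most informative action shrinks the confidence interval fastest, which is the intuition behind the faster convergence reported in the paper. The paper's setting additionally trades this exploration off against the leader's own cost, which this sketch omits.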