AI Summary
This work addresses the challenge that large language models (LLMs) struggle to adapt effectively in repeated strategic interactions against unknown or dynamic opponents when relying only on offline training. To overcome this limitation, the study proposes an online policy adaptation framework that integrates the classical learning dynamic of smooth Fictitious Play (sFP) into LLM inference without requiring any parameter updates. The approach leverages in-context learning to model the opponent's time-averaged behavior and employs best-of-N sampling, simulating candidate responses against the opponent model, to approximate a best response. Crucially, it allocates additional computation at inference time to iteratively update beliefs and refine strategies. Evaluated on two repeated negotiation tasks, the method significantly outperforms multiple baselines, demonstrating both effectiveness and scalability in interactive strategic settings.
Abstract
While large language models (LLMs) have emerged as powerful decision-makers across a wide range of single-agent and stationary environments, fewer efforts have been devoted to settings where LLMs must engage in \emph{repeated} and \emph{strategic} interactions with unknown or dynamic opponents. In such settings, recipes built upon \emph{offline} pre-training or fine-tuning, though robust against worst-case adversaries, do not fully exploit the capability of LLMs to adapt \emph{online} based on interaction feedback. Instead, we explore the more natural perspective of scaling inference-time computation as a mechanism for adaptation, embedding the principles of a classical game-theoretical learning dynamic, \emph{smooth Fictitious Play (sFP)}, into LLM inference: (i) for belief formation, we employ an auxiliary opponent model that learns in context to imitate the time-averaged behavior of the opponent; (ii) for best response, we advance best-of-$N$ (BoN) sampling by simulating candidate responses against the opponent model. Empirical evaluations on two distinct forms of repeated negotiation games demonstrate that our method yields significant performance improvements through repeated online interaction compared to various baselines, offering a scalable and principled approach to repeated strategic decision-making without any parameter updates.
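To make the two ingredients of sFP concrete, the following is a minimal sketch in a toy 2x2 matrix game, not the paper's LLM pipeline: belief formation tracks the time-averaged empirical distribution of the opponent's actions, the smooth best response is a softmax over expected payoffs, and a BoN step samples N candidate actions and keeps the one scoring highest against the current belief. The payoff matrix, temperature `tau`, and opponent history are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative payoff matrix for "our" player (not from the paper):
# rows = our actions, columns = opponent actions.
payoff = np.array([[3.0, 0.0],
                   [1.0, 2.0]])

def smooth_best_response(belief, tau=0.5):
    """Softmax ("smoothed") best response to a belief over opponent actions."""
    expected = payoff @ belief               # expected payoff of each of our actions
    logits = expected / tau
    probs = np.exp(logits - logits.max())    # numerically stable softmax
    return probs / probs.sum()

def best_of_n(belief, n=8, tau=0.5):
    """Sample n candidate actions from the smoothed response and keep the one
    with the highest expected payoff against the belief; this stands in for
    scoring BoN candidates by simulating against the opponent model."""
    probs = smooth_best_response(belief, tau)
    candidates = rng.choice(len(probs), size=n, p=probs)
    expected = payoff @ belief
    return int(max(candidates, key=lambda a: expected[a]))

# Belief formation: time-averaged counts of observed opponent actions,
# starting from a uniform prior. The history below is hypothetical.
counts = np.ones(2)
opponent_actions = [1, 1, 0, 1, 1, 1]
for a in opponent_actions:
    counts[a] += 1
    belief = counts / counts.sum()
    action = best_of_n(belief)
```

After the hypothetical history above, the belief concentrates on the opponent's second action, so the smoothed response (and hence the BoN pick) shifts toward our second row, whose expected payoff against that belief is higher.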