🤖 AI Summary
This study investigates strategic interactions between humans and artificial intelligence agents in non-cooperative games, with a focus on how human deviations from rational utility maximization—driven by cognitive biases such as reference dependence and loss aversion—affect equilibrium outcomes. For the first time, prospect theory is employed to model human preferences, while AI agents adhere to expected utility maximization. Mixed-population simulations are conducted across both canonical and custom-designed matrix games. The results reveal multiple emergent behavioral patterns: in some scenarios, human and AI strategies become indistinguishable; in others, anomalies predicted by prospect theory are empirically validated. Moreover, several unanticipated interaction dynamics emerge, collectively offering a systematic account of the complex strategic mechanisms governing human–AI mixed populations in competitive environments.
📝 Abstract
This paper investigates the dynamics of noncooperative interactions between artificial intelligence agents and human decision-makers in strategic environments. In particular, motivated by an extensive literature in behavioral economics, human agents are modeled more faithfully than in prior work using Prospect-Theoretic preferences, while AI agents are modeled as standard expected utility maximizers. Prospect Theory incorporates known cognitive heuristics employed by humans, including reference dependence and greater sensitivity to losses than to equivalent gains (loss aversion). To explore the emergent behaviors of mixed-population (human vs. AI) competition, this paper runs different combinations of expected-utility and Prospect-Theoretic agents in a number of classic matrix games, as well as in examples designed specifically to tease out how strategic behavior differs across preference functions. Extensive numerical simulations are performed across AI agents, aware humans (those with full knowledge of the game structure and payoffs), and learning Prospect agents (i.e., AIs representing humans). A number of interesting observations and patterns emerge, spanning cases where human and AI behavior is barely distinguishable, behavior corroborating Prospect-preference anomalies from the theoretical literature, and unexpected surprises. Code can be found at https://github.com/dylanwaldner/noncooperative-human-AI.
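To make the modeling distinction concrete, the reference dependence and loss aversion described above can be sketched with the standard Kahneman–Tversky value function. This is a minimal illustration, not the paper's actual implementation: the parameter values (`alpha = beta = 0.88`, `lam = 2.25`) are the commonly cited estimates from the Prospect Theory literature, and the function and variable names here are hypothetical.

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Kahneman-Tversky value function over a gain/loss x measured
    relative to a reference point (reference dependence):
    concave for gains, convex and steeper for losses (loss aversion)."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def expected_utility_value(x):
    """Risk-neutral expected-utility baseline: value is the payoff itself."""
    return x

# Loss aversion: a loss of 50 outweighs an equal-sized gain of 50,
# so a Prospect-Theoretic agent dislikes a fair 50/50 gamble
# that a risk-neutral expected-utility agent is indifferent to.
pt_gamble = 0.5 * prospect_value(50) + 0.5 * prospect_value(-50)
eu_gamble = 0.5 * expected_utility_value(50) + 0.5 * expected_utility_value(-50)
print(pt_gamble < 0, eu_gamble == 0)  # PT agent rejects; EU agent indifferent
```

In a matrix game, applying such a transform to each payoff entry (relative to a reference point) before best-response or learning updates is one way the two agent types can diverge in strategy even when facing identical payoffs.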