Learning from Synthetic Labs: Language Models as Auction Participants

📅 2025-07-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study addresses the high cost and poor scalability of traditional auction experiments by proposing a virtual auction simulation framework powered by large language models (LLMs). Methodologically, it employs chain-of-thought-capable LLMs as bidders and finds that they reproduce empirically documented behaviors, including risk aversion and the winner's curse; prompting framed in the language of Nash deviations further moves their bidding toward theoretical equilibrium. The framework supports flexible configuration across LLM backends and auction mechanisms (e.g., English, Dutch, and sealed-bid). The key contributions are: (i) a low-cost, reproducible, and scalable virtual auction experimentation platform; (ii) execution of over 1,000 simulated auctions for under $400 in total, roughly three orders of magnitude cheaper than comparable human-subject experiments; and (iii) empirical validation showing that LLM bidder behavior closely aligns with human experimental findings in the literature and, under obviously strategy-proof mechanisms, converges more closely to theoretical equilibrium predictions.
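The winner's curse mentioned above can be illustrated with a toy common-value auction. The sketch below is purely expository and uses made-up numbers; it is not part of the paper's framework. Each bidder receives an unbiased signal of a common value, but the highest signal systematically overestimates it, so a naive first-price winner overpays.

```python
# Hypothetical illustration of the winner's curse in a common-value,
# first-price sealed-bid auction. All numbers are invented for exposition.

def run_common_value_auction(common_value, signals):
    """Each bidder naively bids their private signal of the common value."""
    winning_bid = max(signals)           # highest signal wins (first-price)
    profit = common_value - winning_bid  # winner pays their own bid
    return winning_bid, profit

# Signals are symmetric around the true value of 100, yet the maximum
# signal (110) exceeds it, so the naive winner books a loss.
value = 100
signals = [90, 95, 100, 105, 110]
bid, profit = run_common_value_auction(value, signals)
print(bid, profit)  # → 110 -10
```

The point is the selection effect: conditional on winning, a bidder's signal is biased upward, which is why rational common-value bidders shade their bids.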

📝 Abstract
This paper investigates the behavior of simulated AI agents (large language models, or LLMs) in auctions, introducing a novel synthetic data-generating process to help facilitate the study and design of auctions. We find that LLMs, when endowed with chain-of-thought reasoning capacity, agree with the experimental literature in auctions across a variety of classic auction formats. In particular, we find that LLM bidders produce results consistent with risk-averse human bidders; that they perform closer to theoretical predictions in obviously strategy-proof auctions; and that they succumb to the winner's curse in common value settings. On prompting, we find that LLMs are not very sensitive to naive changes in prompts (e.g., language, currency) but can improve dramatically towards theoretical predictions with the right mental model (i.e., the language of Nash deviations). We run 1,000+ auctions for less than $400 with GPT-4 models (three orders of magnitude cheaper than modern auction experiments) and develop a framework flexible enough to run auction experiments with any LLM and a wide range of auction design specifications, facilitating further experimental study by decreasing costs and serving as a proof-of-concept for the use of LLM proxies.
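The strategy-proofness result in the abstract can be made concrete with a minimal sealed-bid second-price auction round. This is a sketch only: `query_bidder` is a hypothetical stand-in for an LLM call, and the paper's actual prompts and framework code are not reproduced here.

```python
# Minimal sketch of one simulated sealed-bid second-price auction round.
# `query_bidder` is a hypothetical placeholder for an LLM bidder; here it
# bids truthfully, which is the dominant strategy in this mechanism.

def query_bidder(private_value):
    # Stub: a well-behaved bidder in a second-price auction should
    # simply bid its private value.
    return private_value

def second_price_auction(private_values):
    bids = [query_bidder(v) for v in private_values]
    winner = max(range(len(bids)), key=bids.__getitem__)
    price = sorted(bids)[-2]                  # winner pays second-highest bid
    utility = private_values[winner] - price  # winner's realized surplus
    return winner, price, utility

winner, price, utility = second_price_auction([10, 7, 5])
print(winner, price, utility)  # → 0 7 3
```

Because the winner's payment does not depend on its own bid, truthful bidding is optimal regardless of what the other bidders do, which is what makes this format "obviously strategy-proof" and a clean benchmark for LLM behavior.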
Problem

Research questions and friction points this paper is trying to address.

Study LLM behavior in auctions using synthetic data
Compare LLM bidding with human risk-averse tendencies
Develop cost-effective LLM-based auction experiment framework
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs simulate auction participants with chain-of-thought reasoning
Synthetic data-generating process enables cost-effective auction studies
Flexible framework supports diverse LLM models and auction designs
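The flexibility claim above amounts to exposing the auction design as configuration. The names below are illustrative assumptions, not the paper's actual API; they merely sketch the kind of configuration surface such a framework might expose.

```python
# Illustrative (hypothetical) configuration surface for an LLM auction
# framework: mechanism, model backend, and experiment size as parameters.
from dataclasses import dataclass

@dataclass
class AuctionConfig:
    mechanism: str       # e.g. "english", "dutch", "first-price", "second-price"
    model: str           # any LLM backend identifier, e.g. "gpt-4"
    n_bidders: int = 4   # bidders per auction
    n_rounds: int = 10   # repeated rounds per experiment

cfg = AuctionConfig(mechanism="second-price", model="gpt-4")
print(cfg.mechanism, cfg.n_bidders)  # → second-price 4
```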