CSP: A Simulator For Multi-Agent Ranking Competitions

📅 2025-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses the poor scalability and limited real-world relevance of human-in-the-loop ranking competitions. To this end, the authors propose the first multi-agent ranking competition simulation framework powered by large language model (LLM) agents: a configurable simulator in which LLMs act as autonomous document authors, iteratively optimizing their content under dynamic ranking feedback, combined with reproducible data generation and analysis tooling. The key contributions are threefold: (1) a paradigm shift from human-driven to AI-agent-driven ranking competition modeling; (2) empirical identification of strategic, game-theoretic interactions between LLM authors and ranking systems; and (3) the release of multiple competitive datasets demonstrating that LLMs rapidly adapt to ranking signals and achieve substantial positional gains. All code and data are publicly released to support future research on fairness, stability, and adversarial robustness in ranking systems.
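To make the described loop concrete, here is a minimal sketch of a rank/feedback/revise competition, assuming a pipeline of the form "rank all documents, show each agent its position, let each agent revise." The names (`rank`, `llm_rewrite`, `run_competition`) are hypothetical stand-ins; the released simulator's actual API may differ, and the ranker and rewrite step below are toy placeholders rather than the paper's implementation.

```python
def rank(documents, query):
    """Toy stand-in ranker: score documents by query-term frequency.
    The real simulator would plug in an actual ranking function."""
    terms = query.lower().split()

    def score(doc):
        words = doc.lower().split()
        return sum(words.count(t) for t in terms)

    # Indices of documents, best-ranked first.
    return sorted(range(len(documents)), key=lambda i: -score(documents[i]))

def llm_rewrite(document, query, my_rank):
    """Stand-in for an LLM call: the agent revises its document after
    seeing its current rank. A real agent would prompt an LLM with the
    document, the query, and the ranking feedback."""
    return document + " " + query  # trivial keyword-stuffing edit

def run_competition(documents, query, rounds=5):
    """Each round: rank all documents, then let every author revise its
    own document in response to the feedback."""
    history = []
    for _ in range(rounds):
        order = rank(documents, query)  # ranked document indices
        history.append(order)
        documents = [
            llm_rewrite(doc, query, order.index(i))
            for i, doc in enumerate(documents)
        ]
    return history, documents

if __name__ == "__main__":
    docs = ["boots for rocky trails", "waterproof footwear guide", "city sneakers"]
    history, _ = run_competition(docs, "best hiking boots", rounds=3)
    print(history)  # one ranking per round; positional shifts show up here
```

Round-by-round ranking histories like `history` above are what the paper's analysis tooling would operate on when studying adaptation and positional gains.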

📝 Abstract
In ranking competitions, document authors compete for the highest rankings by modifying their content in response to past rankings. Previous studies focused on human participants, primarily students, in controlled settings. The rise of generative AI, particularly Large Language Models (LLMs), introduces a new paradigm: using LLMs as document authors. This approach addresses scalability constraints in human-based competitions and reflects the growing role of LLM-generated content on the web, a prime example of a ranking competition. We introduce a highly configurable ranking competition simulator that leverages LLMs as document authors. It includes analytical tools to examine the resulting datasets. We demonstrate its capabilities by generating multiple datasets and conducting an extensive analysis. Our code and datasets are publicly available for research.
Problem

Research questions and friction points this paper is trying to address.

Human-based ranking competitions, typically run with student participants, scale poorly and are confined to controlled settings.
LLM-generated content plays a growing role on the web, creating ranking competitions whose dynamics are not well understood.
No prior framework exists for simulating and analyzing multi-agent ranking competitions with LLMs as document authors.
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLMs as document authors
Configurable ranking competition simulator (see the configuration sketch after this list)
Publicly available code and datasets
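As a rough illustration of what "configurable" might cover, here is a hypothetical configuration for such a simulator. Every option name and value below is an assumption for illustration, not taken from the paper or its released code.

```python
# Hypothetical simulator configuration; all option names are assumptions.
config = {
    "n_agents": 5,                  # competing LLM document authors
    "rounds": 10,                   # rank/feedback/revise iterations
    "queries": ["best hiking boots"],
    "author_model": "gpt-4o-mini",  # assumed LLM backend per agent
    "ranker": "bm25",               # assumed ranking function
    "feedback": "full_ranking",     # what each agent observes per round
    "seed": 42,                     # for reproducible data generation
}
```

Separating the competition parameters from the loop itself is what would let such a simulator swap rankers, LLM backends, and feedback regimes when generating the different datasets the paper reports.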