Understanding the Role of Large Language Models in Competitive Programming

📅 2025-09-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This study investigates the systemic impact of large language models (LLMs) on competitive programming ecosystems, focusing on the fairness crises and governance failures exacerbated by AI-assisted cheating. Employing a mixed-methods approach comprising 37 in-depth interviews and a global survey of 207 contestants, the research identifies how LLMs reshape workflows and role boundaries among competitors, problem authors, coaches, and platform administrators. It proposes a chess-inspired, multi-tiered governance framework that integrates real-time LLM behavior detection, peer-based oversight, and offline cross-verification of submissions, balancing technical augmentation with integrity preservation. The study uncovers critical regulatory gaps in current competition policies and offers actionable, empirically grounded interventions. By bridging theory and practice, it establishes both a conceptual foundation and an implementable governance paradigm for high-integrity technical competitions in the AI era.
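
To make the offline cross-verification tier concrete, below is a minimal sketch of one way it could work, in the spirit of chess anti-cheating statistics: flag contestants whose online results diverge anomalously from their proctored offline results. The paper names the measure but does not publish an algorithm; the median-based divergence statistic, the handles, and the threshold here are illustrative assumptions, not the authors' method.

```python
from statistics import median

def divergence_flags(perf, threshold=3.0):
    """Flag handles whose online percentile exceeds their offline
    (proctored/onsite) percentile by a robustly anomalous margin.

    perf: handle -> (online_percentile, offline_percentile), each in [0, 100].
    """
    gaps = {h: online - offline for h, (online, offline) in perf.items()}
    med = median(gaps.values())
    mad = median(abs(g - med) for g in gaps.values())  # median absolute deviation
    if mad == 0:                                       # all gaps identical
        return []
    return sorted(h for h, g in gaps.items() if (g - med) / mad > threshold)

# Toy data: percentiles averaged over past contests (assumed numbers).
history = {
    "alice": (62.0, 58.0),
    "bob":   (97.0, 41.0),   # far stronger online than offline -> review
    "carol": (75.0, 80.0),
    "dave":  (55.0, 50.0),
}
print(divergence_flags(history))  # ['bob'] under these assumed numbers
```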

📝 Abstract
This paper investigates how large language models (LLMs) are reshaping competitive programming. The field functions as an intellectual contest within computer science education and is marked by rapid iteration, real-time feedback, transparent solutions, and strict integrity norms. Prior work has evaluated LLM performance on contest problems, but little is known about how human stakeholders -- contestants, problem setters, coaches, and platform stewards -- are adapting their workflows and contest norms under LLM-induced shifts. At the same time, rising AI-assisted misuse and inconsistent governance expose urgent gaps in sustaining fairness and credibility. Drawing on 37 interviews spanning all four roles and a global survey of 207 contestants, we contribute: (i) an empirical account of evolving workflows, (ii) an analysis of contested fairness norms, and (iii) a chess-inspired governance approach with actionable measures -- real-time LLM checks in online contests, peer co-monitoring and reporting, and cross-validation against offline performance -- to curb LLM-assisted misuse while preserving fairness, transparency, and credibility.
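
The second measure, peer co-monitoring and reporting, could plausibly be operationalized as a weighted report queue so that no single accusation triggers a penalty. The sketch below is an assumption-laden illustration rather than the authors' design; the reporter credibility weights and the trigger threshold are hypothetical.

```python
from collections import defaultdict

def review_queue(reports, credibility, trigger=2.0):
    """Aggregate peer reports into a human-review queue.

    reports: iterable of (reporter, accused) handle pairs from one contest.
    credibility: reporter -> weight in (0, 1], e.g. from past report accuracy.
    A case is queued only once its summed report weight reaches `trigger`,
    so no single report can convict anyone on its own.
    """
    score = defaultdict(float)
    seen = set()                      # count each reporter once per accused
    for reporter, accused in reports:
        if (reporter, accused) not in seen:
            seen.add((reporter, accused))
            score[accused] += credibility.get(reporter, 0.5)
    return sorted(h for h, s in score.items() if s >= trigger)

# Hypothetical contest: three independent reports queue "x" for review.
reports = [("p1", "x"), ("p2", "x"), ("p3", "x"), ("p1", "y")]
cred = {"p1": 0.9, "p2": 0.8, "p3": 0.6}
print(review_queue(reports, cred))    # ['x']: 0.9 + 0.8 + 0.6 = 2.3 >= 2.0
```
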
Problem

Research questions and friction points this paper is trying to address.

Investigating human adaptation to LLMs in competitive programming
Addressing AI-assisted misuse and governance gaps that undermine fairness
Proposing real-time checks, peer co-monitoring, and offline cross-validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Empirical analysis of evolving human workflows
Analysis of contested fairness norms in competitive programming
Chess-inspired governance with real-time LLM checks (sketched below)
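
One plausible, hypothetical reading of a real-time LLM check is a similarity screen: generate reference solutions with an LLM for each problem, then compare incoming submissions against them and route near-matches to human review. The normalization step, the SequenceMatcher metric, and the 0.85 cutoff below are assumptions for illustration; the paper names the measure but does not specify a detector.

```python
import re
from difflib import SequenceMatcher

def normalize(code: str) -> str:
    """Drop line comments and collapse whitespace so formatting alone
    cannot hide or manufacture a match."""
    code = re.sub(r"#.*", "", code)
    return " ".join(code.split())

def llm_similarity(submission: str, llm_solutions: list[str]) -> float:
    """Highest similarity between a submission and any LLM reference solution."""
    sub = normalize(submission)
    return max(
        (SequenceMatcher(None, sub, normalize(ref)).ratio() for ref in llm_solutions),
        default=0.0,
    )

def flag_for_review(submission: str, llm_solutions: list[str],
                    cutoff: float = 0.85) -> bool:
    """True routes the submission to human review; it never auto-penalizes."""
    return llm_similarity(submission, llm_solutions) >= cutoff

# Usage: a submission identical to an LLM reference up to a comment is flagged.
refs = ["print(sum(map(int, input().split())))"]
print(flag_for_review("print(sum(map(int, input().split())))  # fast io", refs))  # True
```
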
🔎 Similar Papers
No similar papers found.