Online Submission and Evaluation System Design for Competition Operations

📅 2025-07-23
🤖 AI Summary
Organizing periodic algorithm competitions poses significant challenges, including cumbersome submission management, poor cross-platform compatibility, and non-reproducible evaluations. This paper proposes a scalable, automated online competition system that integrates a web service architecture, task queues, and Docker-based containerization to fully automate submission ingestion, isolated execution, and automatic grading. Its key contribution is a lightweight, container-based environment isolation mechanism that ensures evaluation fairness and result reproducibility while enabling unified assessment across heterogeneous development environments. The system has been successfully deployed in multiple international competitions—including the Grid-Based Pathfinding Competition and the League of Robot Runners—demonstrating substantial reductions in organizational overhead, improved grading efficiency and accuracy, and robust support for longitudinal tracking of algorithmic progress. It establishes a sustainable, production-grade technical infrastructure for competitive algorithm evaluation.
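The summary describes a pipeline of a web service, task queues, and Docker-based containerization for isolated, reproducible evaluation. As a rough illustration of that idea (not the paper's actual implementation — the function names, image name, and resource limits below are hypothetical), a worker could launch each submission in a throwaway container with networking disabled and CPU time and memory capped:

```python
# Hypothetical sketch of container-isolated evaluation, in the spirit of the
# system described above. All names here (build_docker_command, evaluate,
# /submission/run.sh) are illustrative assumptions, not from the paper.
import subprocess
from pathlib import Path

def build_docker_command(image: str, submission_dir: Path,
                         time_limit_s: int = 300,
                         mem_limit: str = "4g") -> list[str]:
    """Construct a `docker run` invocation that mounts the submission
    read-only, disables networking, and caps runtime and memory."""
    return [
        "docker", "run", "--rm",
        "--network", "none",                        # no network access
        "--memory", mem_limit,                      # hard memory cap
        "-v", f"{submission_dir}:/submission:ro",   # read-only mount
        image,
        "timeout", str(time_limit_s), "/submission/run.sh",
    ]

def evaluate(image: str, submission_dir: Path) -> bool:
    """Run one submission in an isolated container; pass/fail by exit code."""
    cmd = build_docker_command(image, submission_dir)
    result = subprocess.run(cmd, capture_output=True, text=True)
    return result.returncode == 0
```

Because every submission runs against the same image with the same limits, results are comparable across participants regardless of their local development environments, which is the fairness and reproducibility property the summary highlights.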

📝 Abstract
Research communities have developed benchmark datasets across domains to compare the performance of algorithms and techniques. However, tracking progress in these research areas is not easy, as publications appear in different venues at the same time, and many of them claim to represent the state of the art. To address this, research communities often organise periodic competitions to evaluate the performance of various algorithms and techniques, thereby tracking advancements in the field. However, these competitions pose a significant operational burden: organisers must manage and evaluate a large volume of submissions, and participants typically develop their solutions in diverse environments, leading to compatibility issues during evaluation. This paper presents an online competition system that automates the submission and evaluation process for a competition. The system allows organisers to manage large numbers of submissions efficiently, using isolated environments to evaluate them. It has already been used successfully for several competitions, including the Grid-Based Pathfinding Competition and the League of Robot Runners competition.
Problem

Research questions and friction points this paper is trying to address.

Automating the submission and evaluation process for research competitions
Managing and grading large volumes of submissions efficiently
Compatibility issues arising from participants' diverse development environments
Innovation

Methods, ideas, or system contributions that make the work stand out.

Fully automated pipeline for submission ingestion, isolated execution, and grading
Lightweight Docker-based environment isolation for fair, reproducible evaluation
Deployed in the Grid-Based Pathfinding Competition and the League of Robot Runners
Zhe Chen
Department of Data Science and Artificial Intelligence, Monash University, Australia
Daniel Harabor
Monash University
Artificial Intelligence · Heuristic Search · Optimisation
Ryan Hechenberger
Department of Data Science and Artificial Intelligence, Monash University, Australia
Nathan R. Sturtevant
Department of Computing Science, University of Alberta, Canada