Inaugural MOASEI Competition at AAMAS'2025: A Technical Report

📅 2025-07-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
Multi-agent decision-making in open-world environments faces challenges including dynamic evolution, partial observability, and the continuous arrival and departure of agents and tasks. Method: This paper introduces MOASEI, a benchmark competition platform built on the free-range-zoo environment suite and featuring three tracks: Wildfire, Rideshare, and Cybersecurity. It proposes the first dynamic agent-and-task onboarding/offboarding mechanism and a multidimensional evaluation framework assessing adaptability, robustness, and responsiveness. Submitted solutions spanned graph neural networks, convolutional architectures, predictive modeling, and large language model-driven meta-optimization, targeting collaborative perception and decision-making under complex open-world conditions. Contribution/Results: MOASEI attracted 11 international teams; four submitted valid solutions, empirically validating the platform's design. It delivers the first reproducible open-world multi-agent benchmark framework and accompanying empirical dataset, advancing both theoretical understanding and practical development of open agent systems.
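
To make the onboarding/offboarding idea concrete, below is a minimal sketch of an episode loop under agent openness, assuming a PettingZoo-style parallel API in which the set of live agents (`env.agents`) can grow or shrink between steps. The environment interface, `policies` mapping, and reward bookkeeping are illustrative assumptions, not the actual free-range-zoo API.

```python
# Minimal sketch of an episode loop under agent/task openness.
# Assumes a PettingZoo-style parallel API where the set of live
# agents (env.agents) may change between steps; all names here are
# illustrative, not the competition's actual interface.

def run_episode(env, policies, max_steps=200):
    observations, infos = env.reset()
    total_reward = {agent: 0.0 for agent in env.agents}

    for _ in range(max_steps):
        if not env.agents:  # every agent has departed
            break

        # Only agents currently present act; newly onboarded agents
        # simply appear in env.agents on a later step.
        actions = {
            agent: policies[agent].act(observations[agent])
            for agent in env.agents
        }

        observations, rewards, terminations, truncations, infos = env.step(actions)

        # .get() handles agents that joined after reset()
        for agent, reward in rewards.items():
            total_reward[agent] = total_reward.get(agent, 0.0) + reward

    return total_reward
```

Because the action dictionary is rebuilt from `env.agents` every step, the same loop handles departures (keys vanish) and arrivals (keys appear) without special cases.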

📝 Abstract
We present the Methods for Open Agent Systems Evaluation Initiative (MOASEI) Competition, a multi-agent AI benchmarking event designed to evaluate decision-making under open-world conditions. Built on the free-range-zoo environment suite, MOASEI introduced dynamic, partially observable domains with agent and task openness: settings where entities may appear, disappear, or change behavior over time. The 2025 competition featured three tracks (Wildfire, Rideshare, and Cybersecurity), each highlighting distinct dimensions of openness and coordination complexity. Eleven teams from international institutions participated, with four of those teams submitting diverse solutions including graph neural networks, convolutional architectures, predictive modeling, and large language model-driven meta-optimization. Evaluation metrics centered on expected utility, robustness to perturbations, and responsiveness to environmental change. The results reveal promising strategies for generalization and adaptation in open environments, offering both empirical insight and infrastructure for future research. This report details the competition's design, findings, and contributions to the open-agent systems research community.
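
The three evaluation axes named in the abstract (expected utility, robustness to perturbations, responsiveness to change) suggest a simple aggregate scoring scheme. The sketch below is a hypothetical reconstruction of such metrics, not the competition's published scoring code; the perturbation protocol, return estimates, and responsiveness window are all assumptions.

```python
import statistics

def expected_utility(returns):
    """Mean episodic return across evaluation seeds (Monte Carlo estimate)."""
    return statistics.mean(returns)

def robustness(baseline_returns, perturbed_returns):
    """Fraction of utility retained under perturbed dynamics; 1.0 = unaffected.

    `perturbed_returns` would come from re-running the policy with shifted
    environment parameters (an assumption about the evaluation protocol).
    """
    base = expected_utility(baseline_returns)
    if base == 0:
        return 0.0
    return expected_utility(perturbed_returns) / base

def responsiveness(rewards_after_change, window=10):
    """Average reward over the first `window` steps after an openness event
    (e.g., a new task arriving), as a crude recovery-speed proxy."""
    return statistics.mean(rewards_after_change[:window])
```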
Problem

Research questions and friction points this paper is trying to address.

Evaluates decision-making in open-world multi-agent systems
Assesses agent adaptability to dynamic, partially observable environments
Measures robustness and responsiveness in changing task conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Dynamic, partially observable multi-agent domains
Graph neural networks for agent coordination (see the sketch after this list)
Large language model-driven meta-optimization
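
As referenced in the coordination item above, a generic way to realize GNN-based agent coordination is one round of message passing over an agent communication graph. The PyTorch sketch below illustrates that pattern only; it is not any team's submitted architecture, and the mean-aggregation scheme and layer sizes are illustrative choices.

```python
import torch
import torch.nn as nn

class AgentMessagePassing(nn.Module):
    """One round of mean-aggregation message passing over an agent graph.

    Each agent embeds its observation, averages messages from its
    neighbours, and mixes the aggregate back into its own embedding.
    """

    def __init__(self, obs_dim, hidden_dim):
        super().__init__()
        self.encode = nn.Linear(obs_dim, hidden_dim)
        self.message = nn.Linear(hidden_dim, hidden_dim)
        self.update = nn.Linear(2 * hidden_dim, hidden_dim)

    def forward(self, obs, adjacency):
        # obs: (n_agents, obs_dim); adjacency: (n_agents, n_agents) 0/1 mask
        h = torch.relu(self.encode(obs))
        msgs = torch.relu(self.message(h))
        # Mean of neighbours' messages; clamp degree to avoid divide-by-zero
        degree = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        agg = adjacency @ msgs / degree
        return torch.relu(self.update(torch.cat([h, agg], dim=1)))
```

In an open setting, the adjacency mask can be rebuilt each step from whichever agents are currently present, so the same layer accommodates a varying agent count.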