🤖 AI Summary
To address the challenges of high exploration bias, incomplete scene coverage, and low efficiency in automated testing of 3D games, this paper proposes cMarlTest, a curiosity-driven multi-agent reinforcement learning framework. cMarlTest innovatively integrates an Intrinsic Curiosity Module (ICM) with multi-agent RL, enabling collaborative exploration and task decomposition among agents to jointly optimize state, action, and path coverage. It further employs distributed policy learning to enhance training stability. Experimental evaluation across multiple 3D game levels demonstrates that cMarlTest achieves, on average, a 27.4% improvement in test coverage over single-agent RL baselines, while reducing the time required to reach equivalent coverage by 39.1%. These results substantiate significant gains in testing breadth, depth, and efficiency.
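The core mechanism described above, an Intrinsic Curiosity Module (ICM) that rewards agents for visiting poorly predicted (i.e., novel) transitions, can be sketched minimally. This is an illustrative simplification, not the paper's implementation: it assumes states are feature vectors and uses a single linear forward model in place of the learned encoder and networks a full ICM would have; the class and parameter names (`IntrinsicCuriosity`, `eta`) are hypothetical.

```python
import numpy as np

class IntrinsicCuriosity:
    """Minimal ICM-style curiosity bonus (illustrative sketch only).

    A forward model predicts the next state from (state, action); its
    prediction error is the intrinsic reward. Familiar transitions become
    well predicted, so their bonus decays, steering agents toward
    unexplored parts of the game level.
    """

    def __init__(self, state_dim, action_dim, lr=0.01, eta=1.0, seed=0):
        rng = np.random.default_rng(seed)
        # Linear forward model: next_state ≈ W @ [state; action].
        self.W = rng.normal(scale=0.1, size=(state_dim, state_dim + action_dim))
        self.lr = lr    # learning rate of the forward model
        self.eta = eta  # scaling of the curiosity bonus

    def reward(self, state, action, next_state):
        x = np.concatenate([state, action])
        pred = self.W @ x
        err = next_state - pred
        # Curiosity bonus: squared forward-model prediction error.
        bonus = self.eta * 0.5 * float(err @ err)
        # One gradient step, so repeated transitions yield shrinking bonuses.
        self.W += self.lr * np.outer(err, x)
        return bonus
```

In a multi-agent setting like the one the summary describes, each agent (or a shared module) would add this bonus to its extrinsic reward; replaying the same transition drives the bonus toward zero, which is what pushes agents to spread over the state, action, and path space.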
📝 Abstract
Recently, testing of games via autonomous agents has shown great promise in tackling challenges faced by the game industry, which has mainly relied on either manual testing or record/replay. In particular, Reinforcement Learning (RL) solutions have shown potential by learning directly from playing the game, without the need for human intervention. In this paper, we present cMarlTest, an approach for testing 3D games through curiosity-driven Multi-Agent Reinforcement Learning (MARL). cMarlTest deploys multiple agents that work collaboratively to achieve the testing objective; using multiple agents helps resolve issues faced by a single-agent approach. We carried out experiments on different levels of a 3D game, comparing the performance of cMarlTest with a single-agent RL variant. Results are promising: across three different types of coverage criteria, cMarlTest achieved higher coverage, and it was also more efficient than the single-agent variant in terms of the time taken.