Fly, Fail, Fix: Iterative Game Repair with Reinforcement Learning and Large Multimodal Models

πŸ“… 2025-07-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Existing game design tools rely on static code and resource analysis, failing to model the dynamic mapping from game rules to player behaviorβ€”thus hindering gameplay prediction and optimization. This paper introduces the first automated iterative framework integrating reinforcement learning (RL) agents with large multimodal models (LMMs). An RL agent executes gameplay in the live environment, generating multimodal behavioral trajectories comprising numerical logs and screen-frame sequences. The LMM processes these trajectories via image-strip encoding to semantically interpret behavioral patterns, detect deviations from target gameplay, and close the loop by adjusting game parameters to converge toward predefined design objectives. Our approach overcomes the limitations of static analysis. Empirical evaluation demonstrates that the LMM reliably interprets RL-generated trajectories and drives multiple rounds of parameter refinement, significantly improving gameplay consistency and objective attainment rates.

πŸ“ Abstract
Game design hinges on understanding how static rules and content translate into dynamic player behavior, something modern generative systems that inspect only a game's code or assets struggle to capture. We present an automated design iteration framework that closes this gap by pairing a reinforcement learning (RL) agent, which playtests the game, with a large multimodal model (LMM), which revises the game based on what the agent does. In each loop the RL player completes several episodes, producing (i) numerical play metrics and/or (ii) a compact image strip summarising recent video frames. The LMM designer receives a gameplay goal and the current game configuration, analyses the play traces, and edits the configuration to steer future behaviour toward the goal. Our results demonstrate that LMMs can reason over behavioral traces supplied by RL agents to iteratively refine game mechanics, pointing toward practical, scalable tools for AI-assisted game design.
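The loop the abstract describes (playtest, compare metrics to the design goal, edit the configuration) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `playtest` stands in for a trained RL agent running real episodes (here a simple probabilistic stub keyed to a hypothetical `enemy_speed` parameter), and `lmm_revise` stands in for the LMM designer's config edit (here a fixed nudge rather than a model call over logs and image strips).

```python
import random

def playtest(config, episodes=20):
    """Stub RL playtest: returns aggregate play metrics for a config.
    Hypothetical model: higher enemy_speed lowers the agent's win rate."""
    p_win = max(0.0, min(1.0, 1.0 - config["enemy_speed"] / 10.0))
    wins = sum(random.random() < p_win for _ in range(episodes))
    return {"win_rate": wins / episodes}

def lmm_revise(config, metrics, goal_win_rate):
    """Stub LMM designer: reads the play metrics and edits the config
    to steer future behaviour toward the design goal."""
    new_config = dict(config)
    if metrics["win_rate"] < goal_win_rate:
        new_config["enemy_speed"] -= 0.5   # game too hard: slow enemies down
    else:
        new_config["enemy_speed"] += 0.5   # game too easy: speed them up
    return new_config

def iterate_design(config, goal_win_rate=0.5, rounds=10, tol=0.1):
    """Fly (playtest), fail (measure deviation from goal), fix (edit config)."""
    metrics = playtest(config)
    for _ in range(rounds):
        if abs(metrics["win_rate"] - goal_win_rate) <= tol:
            break                          # converged on the design objective
        config = lmm_revise(config, metrics, goal_win_rate)
        metrics = playtest(config)
    return config, metrics
```

In the paper's actual system the revision step is driven by an LMM prompted with the gameplay goal, the current configuration, and the multimodal traces (numerical logs plus an image strip of recent frames), rather than the hard-coded rule used in this stub.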
Problem

Research questions and friction points this paper is trying to address.

Automating game design iteration with RL agents and LMMs
Bridging the gap between static game rules and dynamic player behavior
Improving game mechanics via AI-driven analysis and configuration edits
Innovation

Methods, ideas, or system contributions that make the work stand out.

RL agent playtests game iteratively
LMM analyzes traces and edits config
Multimodal feedback refines game mechanics
πŸ”Ž Similar Papers
No similar papers found.