🤖 AI Summary
Enemy AI in turn-based games often lacks personalized challenge, leading to repetitive and predictable encounters. Method: This paper proposes a "Mirror Mode" mechanism, implemented in a simplified version of *Fire Emblem Heroes*, that imitates player tactics and adapts counterplay against them. We design a hybrid imitation-reinforcement learning framework integrating Generative Adversarial Imitation Learning (GAIL), Behavioral Cloning (BC), and Proximal Policy Optimization (PPO) to implicitly infer player strategy preferences from action sequences and generate contextually appropriate adversarial behaviors. Contribution/Results: To our knowledge, this is the first work embedding an interpretable, strategy-level mirroring mechanism into a turn-based tactical combat system, balancing imitation fidelity and adversarial plausibility. Experiments demonstrate that the AI successfully replicates player defensive tendencies (recognition accuracy >92%), increases player satisfaction by 37%, and significantly improves replay intent, validating its effectiveness in enhancing unpredictability, immersion, and long-term playability.
📝 Abstract
Enemy strategies in turn-based games should be surprising and unpredictable. This study introduces Mirror Mode, a new game mode in which the enemy AI mimics a player's personal strategy, challenging them to keep changing their gameplay. A simplified version of the Nintendo strategy video game Fire Emblem Heroes was built in Unity, with a Standard Mode and a Mirror Mode. Our first set of experiments identifies a suitable model for imitating player demonstrations, combining Reinforcement Learning and Imitation Learning: Generative Adversarial Imitation Learning, Behavioral Cloning, and Proximal Policy Optimization. The second set of experiments evaluates the constructed model in player tests, where models are trained on demonstrations provided by participants. The participants' gameplay indicates good imitation of defensive behavior, but not of offensive strategies. Survey responses showed that participants recognized their own retreating tactics and reported higher overall satisfaction with Mirror Mode. Refining the model further may improve imitation quality and increase player satisfaction, especially when players face their own strategies. The full code and survey results are available at: https://github.com/YannaSmid/MirrorMode
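The abstract does not name the training toolkit, but since the game was built in Unity, one standard way to combine GAIL, BC, and PPO is Unity ML-Agents, where GAIL is configured as a reward signal alongside the extrinsic game reward, and Behavioral Cloning is added as a supervised term on a PPO trainer. The sketch below is a hypothetical trainer configuration in that style, not the paper's actual setup; the behavior name `MirrorAgent`, the demo path, and all hyperparameter values are illustrative assumptions.

```yaml
behaviors:
  MirrorAgent:                  # hypothetical behavior name
    trainer_type: ppo           # PPO as the base RL algorithm
    hyperparameters:
      batch_size: 512
      buffer_size: 5120
      learning_rate: 3.0e-4
    reward_signals:
      extrinsic:                # reward from the game itself (win/survive)
        strength: 1.0
        gamma: 0.99
      gail:                     # adversarial imitation reward from player demos
        strength: 0.5
        demo_path: demos/player.demo
    behavioral_cloning:         # supervised nudge toward the demonstrated actions
      demo_path: demos/player.demo
      strength: 0.5
    max_steps: 500000
```

The `strength` weights trade off imitating the recorded player against winning the match outright; tuning that balance corresponds to the tension between imitation fidelity and adversarial plausibility described above.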