Human-Level Competitive Pokémon via Scalable Offline Reinforcement Learning with Transformers

📅 2025-04-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
Addresses the problem of learning strategies under imperfect information in competitive Pokémon battles. Using offline reinforcement learning with Transformer models trained on a large-scale dataset, the resulting agents outperform existing methods.

📝 Abstract
Competitive Pokémon Singles (CPS) is a popular strategy game where players learn to exploit their opponent based on imperfect information in battles that can last more than one hundred stochastic turns. AI research in CPS has been led by heuristic tree search and online self-play, but the game may also create a platform to study adaptive policies trained offline on large datasets. We develop a pipeline to reconstruct the first-person perspective of an agent from logs saved from the third-person perspective of a spectator, thereby unlocking a dataset of real human battles spanning more than a decade that grows larger every day. This dataset enables a black-box approach where we train large sequence models to adapt to their opponent based solely on their input trajectory while selecting moves without explicit search of any kind. We study a progression from imitation learning to offline RL and offline fine-tuning on self-play data in the hardcore competitive setting of Pokémon's four oldest (and most partially observed) game generations. The resulting agents outperform a recent LLM Agent approach and a strong heuristic search engine. While playing anonymously in online battles against humans, our best agents climb to rankings inside the top 10% of active players.
Problem

Research questions and friction points this paper is trying to address.

Learning adaptive policies for competitive Pokémon under imperfect information, across long and highly stochastic battles
Obtaining first-person training data when human battles are only saved as third-person spectator logs
Selecting strong moves without explicit search, relying solely on the agent's input trajectory to adapt to each opponent
Innovation

Methods, ideas, or system contributions that make the work stand out.

Reconstruct first-person perspective from spectator logs
Train large sequence models without explicit search
Progress from imitation learning to offline RL and offline fine-tuning on self-play data
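The perspective-reconstruction idea above can be sketched as follows. This is a minimal, hypothetical illustration, not the paper's actual pipeline: the event schema, field names, and `reconstruct_first_person` function are all assumptions. The key point it demonstrates is that a spectator log contains full information, so the agent's first-person view must be rebuilt by revealing opponent details only as the log discloses them.

```python
# Hypothetical sketch: rebuild an agent's imperfect-information view
# from a third-person spectator log. Event schema is illustrative.

def reconstruct_first_person(events, agent="p1"):
    """Replay spectator events, revealing opponent info only once observed."""
    opponent = "p2" if agent == "p1" else "p1"
    revealed_pokemon = set()   # opponent Pokémon the agent has seen
    revealed_moves = set()     # (pokemon, move) pairs the agent has seen
    trajectory = []
    for ev in events:
        if ev["player"] == opponent:
            if ev["type"] == "switch":
                revealed_pokemon.add(ev["pokemon"])
            elif ev["type"] == "move":
                revealed_pokemon.add(ev["pokemon"])
                revealed_moves.add((ev["pokemon"], ev["move"]))
        # The observation at each step contains only what has been
        # revealed so far -- mirroring the imperfect-information setting.
        trajectory.append({
            "event": ev,
            "opp_pokemon_seen": sorted(revealed_pokemon),
            "opp_moves_seen": sorted(revealed_moves),
        })
    return trajectory

log = [
    {"player": "p2", "type": "switch", "pokemon": "Snorlax"},
    {"player": "p1", "type": "move", "pokemon": "Zapdos", "move": "Thunderbolt"},
    {"player": "p2", "type": "move", "pokemon": "Snorlax", "move": "Body Slam"},
]
traj = reconstruct_first_person(log, agent="p1")
print(traj[0]["opp_pokemon_seen"])  # ['Snorlax']
print(traj[1]["opp_moves_seen"])    # [] -- Body Slam not yet revealed
print(traj[2]["opp_moves_seen"])    # [('Snorlax', 'Body Slam')]
```

A sequence of such per-turn observations (paired with the human player's chosen actions) is the kind of first-person trajectory the paper's sequence models would be trained on.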