PCGRLLM: Large Language Model-Driven Reward Design for Procedural Content Generation Reinforcement Learning

📅 2025-02-15
📈 Citations: 0
Influential: 0
🤖 AI Summary
Manually designing reward functions for game AI is costly and heavily reliant on domain expertise. Method: This paper proposes PCGRLLM, an LLM-based approach that automatically generates reward functions for procedural content generation tasks. Its core contribution is a feedback-driven, reasoning-oriented prompt-engineering framework that combines zero-shot LLM inference, multi-step chain-of-thought prompting, and iterative feedback calibration, enabling end-to-end translation from natural-language story descriptions to executable reward functions in a two-dimensional environment. Contribution/Results: The method substantially reduces human intervention while supporting creative autonomy. Evaluated with two state-of-the-art LLMs on a story-to-reward generation task, it yields performance improvements of 415% and 40% respectively, depending on each model's zero-shot capabilities, demonstrating the approach's generalizability and practical utility.
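The feedback loop described above (generate a reward function from a story, evaluate it, feed the result back as a prompt refinement) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: `llm_generate_reward_fn` is a stub standing in for an actual LLM call, and all names and the grid representation are hypothetical.

```python
# Hypothetical sketch of a feedback-driven story-to-reward loop.
# Names and data structures are illustrative, not from PCGRLLM's codebase.
import textwrap

def llm_generate_reward_fn(story: str, feedback: str) -> str:
    """Stub for the LLM call: returns reward-function source code as text.

    A real implementation would prompt an LLM with the story description,
    chain-of-thought instructions, and any feedback from prior iterations.
    """
    return textwrap.dedent("""
        def reward(level, story_targets):
            # Reward the fraction of story-specified tiles placed correctly.
            hits = sum(1 for pos, tile in story_targets.items()
                       if level.get(pos) == tile)
            return hits / max(len(story_targets), 1)
    """)

def evaluate(reward_src: str, level, story_targets) -> float:
    """Compile the generated source and score a sample level with it."""
    namespace = {}
    exec(reward_src, namespace)
    return namespace["reward"](level, story_targets)

def feedback_loop(story: str, level, story_targets, iterations: int = 3) -> float:
    """Iteratively regenerate the reward function until targets are met."""
    feedback, score = "", 0.0
    for _ in range(iterations):
        src = llm_generate_reward_fn(story, feedback)
        score = evaluate(src, level, story_targets)
        if score >= 1.0:  # all story constraints satisfied
            break
        feedback = f"Previous reward only reached {score:.2f}; refine it."
    return score

# Example: a 2D level as {(row, col): tile} with two story-required tiles.
level = {(0, 0): "door", (1, 1): "key"}
targets = {(0, 0): "door", (1, 1): "key"}
print(feedback_loop("A hero finds a key and opens a door.", level, targets))
```

In the paper's setting, the evaluation step would instead train (or probe) an RL content generator with the candidate reward and feed the resulting metrics back into the prompt.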

📝 Abstract
Reward design plays a pivotal role in the training of game AIs, requiring substantial domain-specific knowledge and human effort. In recent years, several studies have explored reward generation for training game agents and controlling robots using large language models (LLMs). In the content generation literature, there has been early work on generating reward functions for reinforcement learning agent generators. This work introduces PCGRLLM, an extended architecture based on earlier work, which employs a feedback mechanism and several reasoning-based prompt engineering techniques. We evaluate the proposed method on a story-to-reward generation task in a two-dimensional environment using two state-of-the-art LLMs, demonstrating the generalizability of our approach. Our experiments provide insightful evaluations that demonstrate the capabilities of LLMs essential for content generation tasks. The results highlight significant performance improvements of 415% and 40% respectively, depending on the zero-shot capabilities of the language model. Our work demonstrates the potential to reduce human dependency in game AI development, while supporting and enhancing creative processes.
Problem

Research questions and friction points this paper is trying to address.

Automating reward design in game AI
Reducing human effort in content generation
Enhancing reinforcement learning with LLMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

LLM-driven reward design
Feedback mechanism integration
Reasoning-based prompt engineering