🤖 AI Summary
Reward signal design for reinforcement learning in software engineering remains challenging: objectives are inherently multi-faceted and rarely reduce to a single scalar, so rewards typically rely on proxy metrics such as successful compilation or passing tests. Moreover, existing work on reward design is scattered and has not been systematically integrated. This work presents the first comprehensive survey of reward engineering for reinforcement learning in software engineering, covering representative tasks including code generation, repair, and testing. We propose a three-dimensional taxonomy encompassing reward sources, formulations, and optimization strategies. Through a structured analysis of current approaches, we distill common reward design patterns, clarify the state of the field, and identify key challenges and future directions, thereby offering both theoretical grounding and practical guidance for advancing this emerging area.
📝 Abstract
Reinforcement learning is increasingly applied to code-centric tasks such as code generation, summarization, understanding, repair, testing, and optimization, a trend accelerated by large language models and autonomous agents. A key challenge is designing reward signals that make sense for software. In many RL problems the reward is a single, well-defined number; in software this is often not possible, because the goal rarely reduces to one numeric objective. Instead, rewards are usually proxies, commonly checking whether the code compiles, passes tests, or satisfies quality metrics. Although many reward designs have been proposed for code-related tasks, the work is scattered across areas and papers, and no single survey brings these approaches together to show the full landscape of reward design for RL in software. In this survey, we provide the first systematic and comprehensive review of reward engineering for RL in software tasks, focusing on existing methods and techniques. We structure the literature along three complementary dimensions, summarizing the reward-design choices within each, and conclude with challenges and recommendations in the reward design space for SE tasks.
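To make the notion of proxy rewards concrete, here is a minimal, hypothetical sketch of how the two proxies the abstract names (compilation success and test passing) could be combined into a single scalar reward for a generated Python snippet. The function name, the weights, and the test format are illustrative assumptions, not taken from any surveyed method.

```python
def proxy_reward(code: str, tests: list[tuple[str, object]],
                 w_compile: float = 0.2, w_tests: float = 0.8) -> float:
    """Illustrative proxy reward in [0, 1]: compile check + test pass rate.

    Weights and structure are assumptions for the sketch, not a surveyed design.
    """
    # Proxy 1: does the candidate even compile (parse) as Python?
    try:
        compiled = compile(code, "<candidate>", "exec")
    except SyntaxError:
        return 0.0  # no credit for syntactically invalid code

    namespace: dict = {}
    exec(compiled, namespace)  # define the candidate function(s)

    # Proxy 2: fraction of unit tests passed, where each test is
    # (expression, expected_value); a crashing test counts as a failure.
    passed = 0
    for expr, expected in tests:
        try:
            if eval(expr, namespace) == expected:
                passed += 1
        except Exception:
            pass
    test_rate = passed / len(tests) if tests else 0.0

    return w_compile + w_tests * test_rate


# Example: a correct candidate earns the full reward,
# while a syntactically broken one earns zero.
good = "def add(a, b):\n    return a + b\n"
bad = "def add(a, b) return a + b"
cases = [("add(1, 2)", 3), ("add(-1, 1)", 0)]
```

This toy example also illustrates why such rewards are proxies: a candidate can pass every provided test and still be wrong on unseen inputs, which is exactly the reward-design difficulty the survey examines.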