Understanding Reward Hacking in Text-to-Image Reinforcement Learning

📅 2026-01-06
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
This work addresses the reward hacking problem in text-to-image reinforcement learning, where existing reward functions often fail to align with human preferences, leading models to generate high-reward yet distorted or low-quality images. The study systematically uncovers a shared failure mode across multiple reward models: their tendency to inadvertently encourage artifact generation. To mitigate this, the authors propose a lightweight, adaptive artifact-aware reward model that is integrated as a regularizer into the training pipeline. Requiring only a small set of carefully curated examples, this auxiliary reward model effectively identifies and suppresses visual artifacts, significantly enhancing the perceptual realism of generated images. Empirical results demonstrate that the approach consistently alleviates reward hacking across diverse text-to-image reinforcement learning settings.
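The regularizer idea above can be illustrated with a minimal sketch: the artifact reward model's score is subtracted from the base reward so that images which score highly yet contain artifacts are penalized during RL post-training. All names here (`base_reward_model`, `artifact_model`, `lambda_artifact`) are hypothetical stand-ins, not the paper's actual interface.

```python
import torch

def combined_reward(images: torch.Tensor,
                    prompts: list[str],
                    base_reward_model,
                    artifact_model,
                    lambda_artifact: float = 0.5) -> torch.Tensor:
    """Base preference reward regularized by an artifact penalty (sketch)."""
    # Base reward: e.g., an aesthetic or prompt-image consistency score.
    r_base = base_reward_model(images, prompts)         # shape: (batch,)
    # Predicted artifact probability in [0, 1]; higher = more artifacts.
    p_artifact = torch.sigmoid(artifact_model(images))  # shape: (batch,)
    # Subtracting the penalty discourages images that score well on the
    # base reward only by exploiting its blind spots.
    return r_base - lambda_artifact * p_artifact
```

The weight `lambda_artifact` trades off preference alignment against artifact suppression; the paper does not specify a value, so it would need tuning per setup.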

📝 Abstract
Reinforcement learning (RL) has become a standard approach for post-training large language models and, more recently, for improving image generation models, using reward functions to enhance generation quality and human preference alignment. However, existing reward designs are often imperfect proxies for true human judgment, making models prone to reward hacking: producing unrealistic or low-quality images that nevertheless achieve high reward scores. In this work, we systematically analyze reward hacking behaviors in text-to-image (T2I) RL post-training. We investigate how aesthetic/human preference rewards and prompt-image consistency rewards each contribute to reward hacking, and further show that ensembling multiple rewards only partially mitigates the issue. Across diverse reward models, we identify a common failure mode: the generation of artifact-prone images. To address this, we propose a lightweight and adaptive artifact reward model, trained on a small curated dataset of artifact-free and artifact-containing samples. This model can be integrated into existing RL pipelines as an effective regularizer for commonly used reward models. Experiments demonstrate that incorporating our artifact reward significantly improves visual realism and reduces reward hacking across multiple T2I RL setups, showing that lightweight reward augmentation can serve as a safeguard against reward hacking.
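Since the abstract describes the artifact reward model as lightweight and trained on a small curated dataset of artifact-free and artifact-containing samples, one plausible realization is a binary classifier on frozen image features. The sketch below assumes precomputed features and illustrative names; it is not the authors' published implementation.

```python
import torch
import torch.nn as nn

class ArtifactRewardModel(nn.Module):
    """Linear probe over frozen image features; outputs an artifact logit."""
    def __init__(self, feature_dim: int = 768):
        super().__init__()
        self.head = nn.Linear(feature_dim, 1)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # Higher logit = more likely to contain visual artifacts.
        return self.head(feats).squeeze(-1)

def train_step(model, feats, labels, optimizer):
    """One supervised step on (features, artifact labels in {0, 1})."""
    optimizer.zero_grad()
    logits = model(feats)
    loss = nn.functional.binary_cross_entropy_with_logits(
        logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```

Keeping the trainable part to a single linear head is one way to stay data-efficient with only a small curated sample set, consistent with the "lightweight" framing in the abstract.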
Problem

Research questions and friction points this paper is trying to address.

reward hacking
text-to-image generation
reinforcement learning
reward function
image artifacts
Innovation

Methods, ideas, or system contributions that make the work stand out.

reward hacking
text-to-image generation
reinforcement learning
artifact detection
reward modeling