Ctrl-U: Robust Conditional Image Generation via Uncertainty-aware Reward Modeling

📅 2024-10-15
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
In instruction-driven conditional image generation, inaccurate reward model feedback leads to semantic misalignment and generation distortion. To address this, we propose an uncertainty-aware reward modeling framework. Our approach is the first to explicitly model the predictive variance of reward estimates as a principled uncertainty metric, enabling dynamic loss reweighting that effectively suppresses noise from unreliable feedback. Furthermore, we introduce a consistency constraint during reward model fine-tuning to enhance its generalization capability and robustness. Evaluated across diverse tasks—including text-to-image and layout-to-image generation—our method significantly improves conditional controllability and visual fidelity. It consistently outperforms existing approaches, achieving state-of-the-art performance on multiple benchmarks.

📝 Abstract
In this paper, we focus on the task of conditional image generation, where an image is synthesized according to user instructions. The critical challenge underpinning this task is ensuring both the fidelity of the generated images and their semantic alignment with the provided conditions. To tackle this issue, previous studies have employed supervised perceptual losses derived from pre-trained models, i.e., reward models, to enforce alignment between the condition and the generated result. However, we observe one inherent shortcoming: given the diversity of synthesized images, the reward model usually provides inaccurate feedback when encountering newly generated data, which can undermine the training process. To address this limitation, we propose an uncertainty-aware reward modeling framework, called Ctrl-U, comprising uncertainty estimation and uncertainty-aware regularization, designed to reduce the adverse effects of imprecise feedback from the reward model. Owing to the inherent cognitive uncertainty within reward models, even images generated under identical conditions often yield a relatively large discrepancy in reward loss. Motivated by this observation, we explicitly leverage such prediction variance as an uncertainty indicator. Based on the uncertainty estimation, we regularize model training by adaptively rectifying the reward. In particular, rewards with lower uncertainty receive higher loss weights, while those with higher uncertainty are given reduced weights to allow for larger variability. The proposed uncertainty regularization facilitates reward fine-tuning through consistency construction. Extensive experiments validate the effectiveness of our methodology in improving controllability and generation quality, as well as its scalability across diverse conditional scenarios. Code is publicly available at https://grenoble-zhang.github.io/Ctrl-U-Page/.
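The reweighting idea in the abstract can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: it assumes the uncertainty indicator is the variance of repeated reward predictions for the same image, and it assumes an exponential down-weighting `exp(-lam * variance)` as the rectification rule, which is a hypothetical choice standing in for the paper's exact scheme.

```python
import math
import statistics

def uncertainty_weighted_loss(reward_samples, lam=1.0):
    """Down-weight reward feedback whose repeated estimates disagree.

    reward_samples: list of lists; reward_samples[i] holds several reward
    predictions for the i-th generated image (e.g. from stochastic passes
    of the reward model under identical conditions).

    Returns (weights, total_loss). The per-image loss is the negative mean
    reward (reward maximization), re-weighted by exp(-lam * variance) so
    that low-variance (reliable) rewards contribute more to training.
    """
    weights, losses = [], []
    for preds in reward_samples:
        mean_r = statistics.fmean(preds)
        var_r = statistics.pvariance(preds)   # prediction variance as uncertainty
        w = math.exp(-lam * var_r)            # low uncertainty -> weight near 1
        weights.append(w)
        losses.append(-mean_r)
    total = sum(w * l for w, l in zip(weights, losses)) / sum(weights)
    return weights, total
```

With a consistent set of predictions `[0.9, 0.9, 0.9]` the weight stays at 1.0, while a noisy set like `[0.9, 0.1, 0.5]` is assigned a smaller weight, so its (less reliable) reward signal contributes less to the aggregate loss.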
Problem

Research questions and friction points this paper is trying to address.

Ensuring fidelity in conditional image generation
Improving semantic alignment with user instructions
Reducing adverse effects of imprecise reward feedback
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uncertainty-aware reward modeling
Adaptive reward regularization
Consistency-based reward fine-tuning