🤖 AI Summary
Existing continuous affective image generation methods struggle to maintain both affective continuity and semantic affective fidelity, primarily due to the absence of dynamic affective feedback on generated images and adaptive prompt refinement mechanisms. To address this, we propose EmoFeedback2—a novel framework that introduces large vision-language models (LVLMs) into affective assessment for the first time, establishing a “generate–understand–feedback” reinforcement learning paradigm. Specifically, we design an LVLM-driven affective perception reward mechanism to quantitatively evaluate affective continuity, and propose a self-promoting textual feedback mechanism to dynamically align affective prompts with image semantics. By jointly leveraging LVLM fine-tuning, reinforcement learning optimization, and adaptive prompting, EmoFeedback2 significantly enhances affective controllability. Extensive experiments on our curated benchmark demonstrate that EmoFeedback2 outperforms state-of-the-art methods, generating high-fidelity images with superior affective continuity and semantic consistency.
📝 Abstract
Continuous emotional image generation (C-EICG) is emerging rapidly due to its ability to produce images aligned with both user descriptions and continuous emotional values. However, existing approaches lack emotional feedback from generated images, limiting control over emotional continuity. Additionally, their simple alignment between emotions and naively generated texts fails to adaptively adjust emotional prompts according to image content, leading to insufficient emotional fidelity. To address these concerns, we propose a novel generation-understanding-feedback reinforcement paradigm (EmoFeedback2) for C-EICG, which exploits the reasoning capability of a fine-tuned large vision-language model (LVLM) to provide reward and textual feedback for generating high-quality images with continuous emotions. Specifically, we introduce an emotion-aware reward feedback strategy, in which the LVLM evaluates the emotional values of generated images and computes a reward against the target emotions, guiding the reinforcement fine-tuning of the generative model and enhancing the emotional continuity of the images. Furthermore, we design a self-promotion textual feedback framework, in which the LVLM iteratively analyzes the emotional content of generated images and adaptively produces refinement suggestions for the next-round prompt, improving emotional fidelity with fine-grained content. Extensive experimental results demonstrate that our approach effectively generates high-quality images with the desired emotions, outperforming existing state-of-the-art methods on our custom dataset. The code and dataset will be released soon.
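The abstract does not give the reward formula, but the emotion-aware reward strategy it describes can be sketched as a distance-based reward between the LVLM's predicted continuous emotion value and the target, folded into an iterative generate-understand-feedback loop. The following is a minimal illustrative sketch; all names (`lvlm_predict_emotion`, `lvlm_suggest_refinement`, `generate_image`) are hypothetical stand-ins, not the authors' API.

```python
# Hedged sketch of the generate-understand-feedback loop described above.
# Assumptions: emotions are scalar values (e.g. valence in [-1, 1]), and the
# LVLM returns both a scalar emotion estimate and a textual refinement hint.

def emotion_reward(predicted: float, target: float, scale: float = 1.0) -> float:
    """Reward that peaks at 0 when the LVLM's predicted emotion value
    matches the target, and decreases linearly with the gap."""
    return -scale * abs(predicted - target)

def feedback_loop(prompt: str, target_emotion: float, rounds: int = 3):
    """Iteratively generate, score, and refine the prompt (placeholders)."""
    history = []
    for _ in range(rounds):
        image = generate_image(prompt)                      # generative model
        predicted = lvlm_predict_emotion(image)             # LVLM "understand"
        reward = emotion_reward(predicted, target_emotion)  # RL signal
        history.append((prompt, reward))
        # Self-promotion textual feedback: LVLM proposes next-round prompt.
        prompt = lvlm_suggest_refinement(image, prompt, target_emotion)
    return history

# A perfect match yields zero penalty; larger gaps yield larger penalties.
assert emotion_reward(0.7, 0.7) == 0.0
assert emotion_reward(0.0, 1.0) < emotion_reward(0.8, 1.0)
```

In an actual RL fine-tuning setup, `reward` would drive a policy-gradient update of the generative model rather than just being logged.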