EmoFeedback2: Reinforcement of Continuous Emotional Image Generation via LVLM-based Reward and Textual Feedback

📅 2025-11-25
🤖 AI Summary
Existing continuous affective image generation methods struggle to maintain both affective continuity and semantic affective fidelity, primarily due to the absence of dynamic affective feedback on generated images and adaptive prompt refinement mechanisms. To address this, we propose EmoFeedback2—a novel framework that introduces large vision-language models (LVLMs) into affective assessment for the first time, establishing a “generate–understand–feedback” reinforcement learning paradigm. Specifically, we design an LVLM-driven affective perception reward mechanism to quantitatively evaluate affective continuity, and propose a self-promoting textual feedback mechanism to dynamically align affective prompts with image semantics. By jointly leveraging LVLM fine-tuning, reinforcement learning optimization, and adaptive prompting, EmoFeedback2 significantly enhances affective controllability. Extensive experiments on our curated benchmark demonstrate that EmoFeedback2 outperforms state-of-the-art methods, generating high-fidelity images with superior affective continuity and semantic consistency.

📝 Abstract
Continuous emotional image generation (C-EICG) is emerging rapidly due to its ability to produce images aligned with both user descriptions and continuous emotional values. However, existing approaches lack emotional feedback from generated images, limiting control over emotional continuity. Additionally, their simple alignment between emotions and naively generated texts fails to adaptively adjust emotional prompts according to image content, leading to insufficient emotional fidelity. To address these concerns, we propose a novel generation-understanding-feedback reinforcement paradigm (EmoFeedback2) for C-EICG, which exploits the reasoning capability of a fine-tuned large vision-language model (LVLM) to provide reward and textual feedback for generating high-quality images with continuous emotions. Specifically, we introduce an emotion-aware reward feedback strategy, where the LVLM evaluates the emotional values of generated images and computes a reward against the target emotions, guiding the reinforcement fine-tuning of the generative model and enhancing the emotional continuity of images. Furthermore, we design a self-promotion textual feedback framework, in which the LVLM iteratively analyzes the emotional content of generated images and adaptively produces refinement suggestions for the next-round prompt, improving emotional fidelity with fine-grained content. Extensive experimental results demonstrate that our approach effectively generates high-quality images with the desired emotions, outperforming existing state-of-the-art methods on our custom dataset. The code and dataset will be released soon.
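The emotion-aware reward described above can be sketched as follows. The distance-based reward shape, the function names, and the value scale are illustrative assumptions, not the paper's exact formulation; `predicted_emotion` stands in for the fine-tuned LVLM's emotional-value estimate of a generated image.

```python
def emotion_reward(predicted_emotion: float, target_emotion: float) -> float:
    """Reward is higher the closer the LVLM's estimate is to the target.

    Emotional values are assumed to lie on a continuous scale (e.g.
    valence in [-1, 1]); the reward is the negative absolute error,
    so a perfect match yields 0 and larger gaps yield lower rewards.
    """
    return -abs(predicted_emotion - target_emotion)

# A target trajectory of continuous emotional values and the rewards
# for a batch of generated images (toy numbers for illustration):
targets = [-1.0, -0.5, 0.0, 0.5, 1.0]
predictions = [-0.9, -0.6, 0.2, 0.5, 0.8]
rewards = [emotion_reward(p, t) for p, t in zip(predictions, targets)]
```

In a reinforcement fine-tuning setup, such per-image rewards would drive a policy-gradient update of the generative model toward the target emotion trajectory.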
Problem

Research questions and friction points this paper is trying to address.

Lack of emotional feedback from generated images, limiting control over emotional continuity
Insufficient emotional fidelity because emotion-text alignment does not adapt to image content
Need to generate high-fidelity images conditioned on continuous emotional values
Innovation

Methods, ideas, or system contributions that make the work stand out.

LVLM-based reward feedback for emotional continuity
Self-promotion textual feedback for emotional fidelity
Fine-tuned LVLM provides iterative refinement suggestions
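A minimal sketch of how such an iterative textual-feedback loop could operate, with the generator and the LVLM replaced by toy numeric stand-ins. All names and dynamics here are illustrative assumptions, not the paper's implementation: the "image" is a single emotional value, and the "refinement suggestion" nudges the prompt's intensity toward the target.

```python
def feedback_loop(intensity: float, target: float,
                  rounds: int = 5, tol: float = 0.05) -> float:
    """Iteratively refine a prompt-intensity scalar until the generated
    emotion falls within `tol` of the target, or `rounds` is exhausted."""
    emotion = 0.0
    for _ in range(rounds):
        emotion = 0.8 * intensity    # toy generator: prompt -> image emotion
        gap = target - emotion       # toy LVLM analysis of the emotion gap
        if abs(gap) < tol:
            break                    # emotion already matches the target
        intensity += gap             # toy next-round prompt refinement
    return emotion
```

In the real framework the analysis step would be the LVLM critiquing the generated image and the refinement step would rewrite the textual prompt, but the converging structure of the loop is the same.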
👥 Authors
Jingyang Jia
University of Science and Technology of China
Kai Shu
Assistant Professor of Computer Science, Emory University
Data Mining, Trustworthy AI, Social Computing, Machine Learning, AI Safety
Gang Yang
University of Science and Technology of China
Long Xing
University of Science and Technology of China
Xun Chen
University of Science and Technology of China
Aiping Liu
University of Science and Technology of China