Human-Feedback Efficient Reinforcement Learning for Online Diffusion Model Finetuning

πŸ“… 2024-10-07
πŸ›οΈ arXiv.org
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
Existing online fine-tuning methods for Stable Diffusion use human feedback inefficiently, rely on predefined reward functions, or depend on reward models pretrained offline on large-scale datasets. Method: This paper proposes HERO, a human-feedback-driven optimization framework that requires neither predefined reward functions nor offline-pretrained reward models and instead leverages real-time human feedback directly. HERO introduces two mechanisms: (i) feedback-aligned representation learning, which captures human intent in the latent space, and (ii) feedback-guided image generation, which speeds convergence toward the evaluator's intent. Contribution/Results: HERO generalizes across tasks with only 500 online feedback instances. On body part anomaly correction, it is 4x more feedback-efficient than the best existing method. It also significantly outperforms baselines on reasoning, counting, personalized generation, and NSFW content suppression.

πŸ“ Abstract
Controllable generation through Stable Diffusion (SD) fine-tuning aims to improve fidelity, safety, and alignment with human guidance. Existing reinforcement learning from human feedback methods usually rely on predefined heuristic reward functions or pretrained reward models built on large-scale datasets, limiting their applicability to scenarios where collecting such data is costly or difficult. To effectively and efficiently utilize human feedback, we develop a framework, HERO, which leverages online human feedback collected on the fly during model learning. Specifically, HERO features two key mechanisms: (1) Feedback-Aligned Representation Learning, an online training method that captures human feedback and provides informative learning signals for fine-tuning, and (2) Feedback-Guided Image Generation, which involves generating images from SD's refined initialization samples, enabling faster convergence towards the evaluator's intent. We demonstrate that HERO is 4x more efficient in online feedback for body part anomaly correction compared to the best existing method. Additionally, experiments show that HERO can effectively handle tasks like reasoning, counting, personalization, and reducing NSFW content with only 0.5K online feedback.
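The first mechanism, Feedback-Aligned Representation Learning, can be pictured as training a small scorer online on binary human feedback so that it yields a reward signal aligned with the evaluator's preferences. Below is a minimal sketch of that idea, not the paper's implementation: the embeddings, dimensions, and logistic head are all hypothetical stand-ins (e.g., the embeddings would come from an image encoder in practice).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical image embeddings: images the evaluator liked cluster around
# one direction, disliked images around another (stand-in for real encoder
# outputs such as CLIP features).
liked = rng.normal(loc=+1.0, scale=0.5, size=(40, 8))
disliked = rng.normal(loc=-1.0, scale=0.5, size=(40, 8))

X = np.vstack([liked, disliked])
y = np.concatenate([np.ones(40), np.zeros(40)])  # 1 = positive feedback

# Fit a tiny logistic head on the binary feedback; its output serves as an
# online reward signal aligned with the evaluator's intent.
w = np.zeros(8)
b = 0.0
lr = 0.1
for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid probabilities
    w -= lr * (X.T @ (p - y)) / len(y)       # gradient step on weights
    b -= lr * float(np.mean(p - y))          # gradient step on bias

def reward(emb):
    """Scalar reward in (0, 1) for a candidate image embedding."""
    return 1.0 / (1.0 + np.exp(-(emb @ w + b)))

# A held-out "liked-style" embedding now scores higher than a
# "disliked-style" one.
good = reward(rng.normal(+1.0, 0.5, size=8))
bad = reward(rng.normal(-1.0, 0.5, size=8))
```

In the actual method this signal is learned on the fly during fine-tuning, so only a few hundred feedback instances are needed rather than a large offline preference dataset.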
Problem

Research questions and friction points this paper is trying to address.

Improving fidelity, safety, and alignment in Stable Diffusion fine-tuning.
Reducing reliance on predefined reward functions and on reward models built from costly large-scale datasets.
Using online human feedback more efficiently.
Innovation

Methods, ideas, or system contributions that make the work stand out.

Online human feedback for model fine-tuning
Feedback-Aligned Representation Learning mechanism
Feedback-Guided Image Generation for faster convergence
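The second mechanism above, Feedback-Guided Image Generation, starts diffusion sampling from refined initialization samples rather than pure noise. A best-of-n selection sketch conveys the idea; the reward function and latent dimensions here are hypothetical placeholders for the feedback-trained scorer, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical reward: proximity to a "preferred" region of latent space,
# standing in for a scorer trained from online human feedback.
TARGET = np.ones(16)

def reward(latent):
    return -float(np.linalg.norm(latent - TARGET))

def refined_init(n_candidates=64, dim=16):
    """Sample candidate initial latents and keep the highest-reward one,
    so generation starts closer to the evaluator's intent."""
    candidates = rng.normal(size=(n_candidates, dim))
    scores = np.array([reward(c) for c in candidates])
    return candidates[scores.argmax()], scores

best, scores = refined_init()
# The selected initialization scores at least as well as an average draw,
# which is what makes convergence toward the intended images faster.
```

This kind of reward-filtered initialization is one simple way to bias sampling toward preferred outputs without retraining the diffusion model itself.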