🤖 AI Summary
This work addresses the common trade-off in large language model alignment, where improvements in output quality often come at the expense of reduced diversity. To tackle this issue, the authors propose a novel framework—Quality-constrained Entropy Maximization Policy Optimization (QEMPO)—which decouples the alignment objective into distinct quality and diversity distributions for the first time. QEMPO maximizes policy entropy under explicit quality constraints, thereby enhancing response diversity without compromising output quality. The approach integrates both online and offline policy optimization and employs a flexible mechanism to enforce quality requirements during generation. Experimental results demonstrate that QEMPO not only preserves or even surpasses the performance of reinforcement learning from human feedback (RLHF) in terms of quality but also significantly improves the diversity of model outputs.
📝 Abstract
Recent research indicates that while alignment methods significantly improve the quality of large language model (LLM) outputs, they simultaneously reduce output diversity. Although some methods have been proposed to enhance LLM output diversity, they often do so at the cost of reduced performance. In this work, we first theoretically demonstrate that the alignment task can be decomposed into two distributions: a quality distribution and a diversity distribution. To enhance the diversity of LLM outputs while ensuring quality, we propose Quality-constrained Entropy Maximization Policy Optimization (QEMPO), which maximizes the output entropy of the policy subject to constraints on output quality; different constraints yield different policies. To optimize these policies, we propose both online and offline training methods. Experiments validate that QEMPO achieves performance comparable to or better than RLHF while improving output diversity.
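The core objective — maximizing entropy subject to a quality constraint — can be illustrated on a toy discrete problem. This is a minimal sketch, not the paper's algorithm: it assumes a fixed set of candidate responses with known quality scores `qualities` and a quality floor `c` (both hypothetical), and uses the standard Lagrangian result that the max-entropy distribution satisfying an expected-quality constraint has the exponential-family form `p_i ∝ exp(λ·q_i)`, with the multiplier `λ ≥ 0` found by bisection.

```python
# Toy sketch (not QEMPO itself): solve  max H(p)  s.t.  E_p[q] >= c
# over a discrete candidate set. The solution is p_i ∝ exp(lam * q_i),
# with lam >= 0 chosen so the constraint binds (lam = 0 if it is slack).
import math

def quality_constrained_maxent(qualities, c, iters=100):
    """Return the maximum-entropy distribution with expected quality >= c."""
    def dist(lam):
        weights = [math.exp(lam * q) for q in qualities]
        total = sum(weights)
        return [w / total for w in weights]

    def expected_quality(lam):
        return sum(p * q for p, q in zip(dist(lam), qualities))

    if expected_quality(0.0) >= c:      # uniform already feasible: lam = 0
        return dist(0.0)
    lo, hi = 0.0, 1.0
    while expected_quality(hi) < c:     # grow bracket until feasible
        hi *= 2.0
    for _ in range(iters):              # bisection on the multiplier
        mid = (lo + hi) / 2.0
        if expected_quality(mid) < c:
            lo = mid
        else:
            hi = mid
    return dist(hi)

qualities = [0.2, 0.5, 0.9, 1.0]        # hypothetical per-response rewards
p = quality_constrained_maxent(qualities, c=0.8)
```

Tightening the constraint `c` concentrates probability on high-quality responses (quality wins); loosening it moves the policy toward uniform (diversity wins), mirroring the quality–diversity trade-off the paper targets.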