Quality-constrained Entropy Maximization Policy Optimization for LLM Diversity

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the common trade-off in large language model alignment, where improvements in output quality often come at the expense of reduced diversity. To tackle this issue, the authors propose a novel framework—Quality-constrained Entropy Maximization Policy Optimization (QEMPO)—which decouples the alignment objective into distinct quality and diversity distributions for the first time. QEMPO maximizes policy entropy under explicit quality constraints, thereby enhancing response diversity without compromising output quality. The approach integrates both online and offline policy optimization and employs a flexible mechanism to enforce quality requirements during generation. Experimental results demonstrate that QEMPO not only preserves or even surpasses the performance of reinforcement learning from human feedback (RLHF) in terms of quality but also significantly improves the diversity of model outputs.

📝 Abstract
Recent research indicates that while alignment methods significantly improve the quality of large language model (LLM) outputs, they simultaneously reduce the diversity of the models' outputs. Although some methods have been proposed to enhance LLM output diversity, they often come at the cost of reduced performance. In this work, we first theoretically demonstrate that the alignment task can be decomposed into two distributions: quality and diversity. To enhance the diversity of LLM outputs while ensuring quality, we propose Quality-constrained Entropy Maximization Policy Optimization (QEMPO). QEMPO aims to maximize the output entropy of the policy while ensuring output quality. By adding different constraints to QEMPO, we obtain different policies. To optimize these policies, we propose both online and offline training methods. Experiments validate that QEMPO achieves performance comparable to or even better than RLHF while improving output diversity.
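The abstract describes maximizing policy entropy subject to a quality constraint. The paper's exact formulation is not given here, so the following is only a minimal sketch of one common way to realize such an objective: a Lagrangian relaxation that trades off a Monte Carlo entropy estimate against a penalty for samples whose reward falls below a quality threshold. The function name `qempo_loss`, the hinge-style penalty, and the multiplier `lam` are all illustrative assumptions, not the authors' method.

```python
def qempo_loss(logprobs, rewards, quality_threshold, lam=1.0):
    """Hypothetical sketch of a quality-constrained entropy-maximization loss.

    logprobs: per-sample log pi(y|x) under the current policy
    rewards: per-sample quality scores from a reward model
    quality_threshold: minimum acceptable quality (assumed constraint form)
    lam: Lagrange-style penalty weight (assumed relaxation of the constraint)
    """
    n = len(logprobs)
    # Monte Carlo entropy estimate: H(pi) ~= -mean(log pi(y|x))
    entropy = -sum(logprobs) / n
    # Average constraint violation: how far each sample's quality falls short
    violation = sum(max(0.0, quality_threshold - r) for r in rewards) / n
    # Minimize negative entropy plus the weighted quality penalty
    return -entropy + lam * violation

# Illustrative call with made-up numbers: only the middle sample (reward 0.4)
# violates the 0.6 quality threshold, so it contributes to the penalty.
loss = qempo_loss(logprobs=[-1.2, -0.8, -2.0],
                  rewards=[0.9, 0.4, 0.7],
                  quality_threshold=0.6, lam=2.0)
```

In a real training loop this scalar would be differentiated with respect to the policy parameters; the sketch only shows how the entropy term and the quality constraint combine into a single objective.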
Problem

Research questions and friction points this paper is trying to address.

LLM diversity
quality alignment
output entropy
policy optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Entropy Maximization
Policy Optimization
Output Diversity
Quality Constraint
Large Language Models
Haihui Pan
Zuoyebang Education Technology
Yuzhong Hong
Zuoyebang Education Technology
Shaoke Lv
Zuoyebang Education Technology
Junwei Bao
zuoyebang.com // JD.com // MSRA
NLP · LLM · QA+Dialog · Generation
Hongfei Jiang
Zuoyebang Education Technology
Yang Song
Zuoyebang Education Technology