🤖 AI Summary
Text summarization requires simultaneous optimization of multiple objectives—consistency, coherence, relevance, and fluency—yet existing LLM-based reinforcement learning (RL) approaches lack dynamic modeling of trade-offs among these objectives. To address this, we propose Hypervolume Optimization Reinforcement Learning (HVO), the first framework to incorporate the hypervolume metric into multi-objective abstractive summarization with large language models. HVO constructs a multi-dimensional reward function grounded in fine-grained evaluation metrics and employs group-level policy optimization to dynamically approximate the Pareto frontier. Experiments on multiple benchmark datasets demonstrate that HVO significantly outperforms GRPO, achieving more balanced performance across all objectives. Notably, a 7B-parameter model optimized via HVO attains overall summarization quality comparable to GPT-4 while generating more concise outputs.
📝 Abstract
Text summarization is a crucial task that requires the simultaneous optimization of multiple objectives, including consistency, coherence, relevance, and fluency, which presents considerable challenges. Although large language models (LLMs) have demonstrated remarkable performance when enhanced by reinforcement learning (RL), few studies have addressed the multi-objective nature of summarization through LLM-based RL. In this paper, we introduce hypervolume optimization (HVO), a novel optimization strategy that dynamically adjusts the scores between groups during the reward process in RL by using the hypervolume method. This method guides the model's optimization to progressively approximate the Pareto front, thereby generating summaries that are balanced across multiple objectives. Experimental results on several representative summarization datasets demonstrate that our method outperforms group relative policy optimization (GRPO) in overall scores and shows more balanced performance across different dimensions. Moreover, a 7B foundation model enhanced by HVO performs comparably to GPT-4 on the summarization task while maintaining a shorter generation length. Our code is publicly available at https://github.com/ai4business-LiAuto/HVO.git
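The abstract does not spell out HVO's exact reward computation, but the core idea — scoring a group of candidate summaries by hypervolume so that reward reflects progress toward the Pareto front — can be sketched. Below is a minimal, illustrative implementation assuming each candidate in a GRPO-style group is scored on the four quality dimensions (consistency, coherence, relevance, fluency) and credited with its marginal hypervolume contribution; the function names, the inclusion–exclusion computation, and the reference point are assumptions for illustration, not the paper's implementation.

```python
from itertools import combinations

def hypervolume(points, ref):
    """Exact hypervolume (for maximization) of the region dominated by
    `points` and bounded below by the reference point `ref`, computed via
    inclusion-exclusion over the axis-aligned boxes [ref, p].
    Exponential in group size, but fine for small RL groups (e.g., ~8)."""
    # Clamp each point to the reference so dominated-below dims contribute 0.
    pts = [tuple(max(p[d], ref[d]) for d in range(len(ref))) for p in points]
    hv = 0.0
    for k in range(1, len(pts) + 1):
        for subset in combinations(pts, k):
            vol = 1.0
            for d in range(len(ref)):
                # Intersection of boxes is governed by the per-dim minimum.
                vol *= min(p[d] for p in subset) - ref[d]
            hv += (-1) ** (k + 1) * vol
    return hv

def hv_contributions(points, ref):
    """Marginal hypervolume contribution of each candidate: how much the
    group's Pareto-front approximation shrinks if that candidate is removed.
    Dominated candidates contribute 0, which naturally down-weights them."""
    total = hypervolume(points, ref)
    return [
        total - hypervolume(points[:i] + points[i + 1:], ref)
        for i in range(len(points))
    ]

# Toy group: two non-dominated candidates scored on two objectives.
group = [(1.0, 0.5), (0.5, 1.0)]
print(hypervolume(group, ref=(0.0, 0.0)))        # 0.75
print(hv_contributions(group, ref=(0.0, 0.0)))   # [0.25, 0.25]
```

In this sketch, a dominated candidate receives zero marginal contribution, so a group-relative advantage built on these scores would push the policy toward candidates that expand the dominated region rather than improve a single objective at the expense of the others.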