Learning to Summarize from LLM-generated Feedback

📅 2024-10-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address hallucination, omission of key information, and redundancy in large language model (LLM) summarization, this paper proposes a feedback-driven framework for optimizing summary quality along multiple fine-grained dimensions. Methodologically, it introduces (1) FeedSum, a large-scale summarization feedback dataset with multi-dimensional, fine-grained annotations covering faithfulness, completeness, and conciseness; (2) an empirical analysis of how feedback quality, dimensionality, and granularity influence preference learning; and (3) a comparison of two training strategies for exploiting this feedback, supervised fine-tuning (SFT) and direct preference optimization (DPO), applied to the Llama3 architecture. The resulting SummLlama3-8B model, despite its much smaller parameter count, outperforms the nearly 10x larger Llama3-70b-instruct baseline in human evaluations on all three core metrics: faithfulness, completeness, and conciseness. This shows that carefully structured feedback and targeted optimization enable compact models to surpass larger counterparts in preference-aligned summarization.

📝 Abstract
Developing effective text summarizers remains a challenge due to issues like hallucinations, key information omissions, and verbosity in LLM-generated summaries. This work explores using LLM-generated feedback to improve summary quality by aligning the summaries with human preferences for faithfulness, completeness, and conciseness. We introduce FeedSum, a large-scale dataset containing multi-dimensional LLM feedback on summaries of varying quality across diverse domains. Our experiments show how feedback quality, dimensionality, and granularity influence preference learning, revealing that high-quality, multi-dimensional, fine-grained feedback significantly improves summary generation. We also compare two methods for using this feedback: supervised fine-tuning and direct preference optimization. Finally, we introduce SummLlama3-8b, a model that outperforms the nearly 10x larger Llama3-70b-instruct in generating human-preferred summaries, demonstrating that smaller models can achieve superior performance with appropriate training. The full dataset and SummLlama3-8B model are available at https://huggingface.co/datasets/DISLab/FeedSum and https://huggingface.co/DISLab/SummLlama3-8B.
Problem

Research questions and friction points this paper is trying to address.

Large Model
Information Inaccuracy
Redundancy
Innovation

Methods, ideas, or system contributions that make the work stand out.

Feedback-enhanced Summarization
SummLlama3-8b Model
Fine-tuning Strategies