What are they talking about? Benchmarking Large Language Models for Knowledge-Grounded Discussion Summarization

📅 2025-05-18
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
This paper addresses observer misinterpretation in dialogue summarization caused by reliance on conversational text alone. It introduces the knowledge-grounded discussion summarization task, which requires models to jointly leverage dialogue content and external background knowledge to generate two distinct summaries: a background (context) summary and an opinion (stance) summary. Methodologically, the authors (1) formally define two standardized knowledge-grounded summarization patterns; (2) construct the first high-quality, human-annotated benchmark dataset for the task; and (3) design a hierarchical, fine-grained, interpretable evaluation framework incorporating structured prompting, self-reflective reasoning, and multi-dimensional metrics. Experiments across 12 state-of-the-art large language models reveal average performance below 69%, exposing systematic deficiencies in background knowledge retrieval, knowledge integration, and stance synthesis. The results further show that current models lack reliable self-assessment and self-correction capabilities.

๐Ÿ“ Abstract
In this work, we investigate the performance of LLMs on a new task that requires combining discussion with background knowledge for summarization. This aims to address the limitation of outside observer confusion in existing dialogue summarization systems due to their reliance solely on discussion information. To achieve this, we model the task output as background and opinion summaries and define two standardized summarization patterns. To support assessment, we introduce the first benchmark comprising high-quality samples consistently annotated by human experts and propose a novel hierarchical evaluation framework with fine-grained, interpretable metrics. We evaluate 12 LLMs under structured-prompt and self-reflection paradigms. Our findings reveal: (1) LLMs struggle with background summary retrieval, generation, and opinion summary integration. (2) Even top LLMs achieve less than 69% average performance across both patterns. (3) Current LLMs lack adequate self-evaluation and self-correction capabilities for this task.
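To make the task setup concrete, here is a minimal sketch of the structured-prompt paradigm described in the abstract: a single call that asks an LLM to produce both the background summary and the opinion summary from a discussion plus retrieved knowledge. It assumes an OpenAI-style chat API; the prompt wording, JSON field names, and model choice are illustrative assumptions, not the paper's actual templates.

```python
# Minimal sketch of the structured-prompt paradigm: one call that yields
# both task outputs (background summary + opinion summary).
# Prompt wording, JSON fields, and model name are illustrative assumptions.
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = """You are summarizing an online discussion for an outside observer.

Discussion:
{discussion}

Background knowledge:
{knowledge}

Return a JSON object with exactly two fields:
  "background_summary": the context an outside observer needs, grounded in the background knowledge;
  "opinion_summary": each participant's stance and how the stances relate.
Return only the JSON object."""


def summarize(discussion: str, knowledge: str, model: str = "gpt-4o") -> dict:
    """Generate both summaries of the task in one structured call."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(
            discussion=discussion, knowledge=knowledge)}],
        temperature=0,
    )
    # Assumes the model complies with the JSON-only instruction.
    return json.loads(response.choices[0].message.content)
```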
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs for knowledge-grounded discussion summarization performance
Addressing observer confusion in dialogue summarization via background integration
Assessing LLM limitations in self-correction for structured summarization tasks
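The last point, limited self-correction, refers to the self-reflection paradigm the paper evaluates. A minimal sketch of such a loop follows, assuming the same OpenAI-style chat API as above; the critique prompt and the single revision round are assumptions, not the paper's protocol.

```python
# Minimal sketch of a self-reflection loop: draft -> self-critique -> revise.
# Prompt wording and the one-round revision budget are assumptions.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str, model: str = "gpt-4o") -> str:
    """Single zero-temperature chat call; returns the raw text reply."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content


def reflect_and_revise(draft: str, discussion: str, knowledge: str) -> str:
    """One critique-then-revise round over an initial pair of summaries."""
    critique = ask(
        "Check these background and opinion summaries against the source.\n\n"
        f"Discussion:\n{discussion}\n\nBackground knowledge:\n{knowledge}\n\n"
        f"Summaries:\n{draft}\n\n"
        "List factual errors, missing background, and misattributed stances."
    )
    return ask(
        "Revise the summaries to fix the listed problems.\n\n"
        f"Summaries:\n{draft}\n\nProblems:\n{critique}\n\n"
        "Return only the revised summaries."
    )
```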
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model task output as background and opinion summaries
Introduce first benchmark with expert-annotated samples
Propose hierarchical evaluation framework with fine-grained metrics
🔎 Similar Papers
No similar papers found.
Weixiao Zhou
State Key Laboratory of Complex & Critical Software Environment, Beihang University
Junnan Zhu
Institute of Automation, Chinese Academy of Sciences
Natural Language Processing
Gengyao Li
State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, CAS; University of Chinese Academy of Sciences
Xianfu Cheng
State Key Laboratory of Complex & Critical Software Environment, Beihang University
Xinnian Liang
Bytedance Inc.
Large Language Model
Feifei Zhai
Institute of Automation, Chinese Academy of Sciences
Machine Translation, Natural Language Processing, Machine Learning
Zhoujun Li
Beihang University
Artificial Intelligence, Natural Language Processing, Network Security