AI Summary
This work addresses Semantic Overlap Summarization (SOS), a novel constrained multi-document summarization task requiring precise extraction of shared semantics between two alternative narratives. We systematically evaluate 15 state-of-the-art large language models (LLMs) on SOS. To this end, we propose the first dedicated LLM evaluation framework for SOS, comprising a curated, open-source benchmark dataset; a structured prompting template system grounded in the TELeR taxonomy; and a multi-dimensional evaluation metric integrating ROUGE, BERTScore, and our novel SEM-F1, designed to quantify semantic overlap fidelity. Experimental results reveal substantial performance disparities across LLMs and consistent bottlenecks: weak logical consistency and insufficient fine-grained semantic alignment in overlap extraction. Our contributions include a reproducible benchmark, methodological tools, and open resources to advance research on constrained summarization.
Abstract
Semantic Overlap Summarization (SOS) is a constrained multi-document summarization task, where the constraint is to capture the common/overlapping information between two alternative narratives. While recent advancements in Large Language Models (LLMs) have achieved superior performance in numerous summarization tasks, a benchmarking study of the SOS task using LLMs is yet to be performed. As LLMs' responses are sensitive to slight variations in prompt design, a major challenge in conducting such a benchmarking study is to systematically explore a variety of prompts before drawing a reliable conclusion. Fortunately, the recently proposed TELeR taxonomy provides a principled way to design and explore such prompts. Using the TELeR taxonomy and 15 popular LLMs, this paper comprehensively evaluates LLMs on the SOS task, assessing their ability to summarize overlapping information from multiple alternative narratives. For evaluation, we report well-established metrics like ROUGE, BERTScore, and SEM-F1 on two different datasets of alternative narratives. We conclude the paper by analyzing the strengths and limitations of various LLMs in terms of their capabilities in capturing overlapping information. The code and datasets used to conduct this study are available at https://anonymous.4open.science/r/llm_eval-E16D.
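As a rough illustration of the lexical-overlap side of this evaluation, the sketch below computes a simplified ROUGE-1 F1 between a reference overlap summary and a model-generated one. This is a minimal stand-alone approximation for intuition only, not the paper's implementation: the reported metrics use standard ROUGE tooling, and BERTScore and SEM-F1 additionally rely on embedding models that cannot be reduced to a few lines.

```python
from collections import Counter

def rouge1_f1(reference: str, candidate: str) -> float:
    """Simplified ROUGE-1 F1: unigram-overlap F1 between a reference
    overlap summary and a generated one (no stemming or stopword handling)."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    overlap = sum((ref & cand).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: shared content of two narratives vs. a model summary
score = rouge1_f1("both reports confirm the bridge closed",
                  "the bridge closed according to both reports")
print(round(score, 3))  # 5 shared unigrams out of 6 (ref) and 7 (cand)
```

In the study itself, scores like this are averaged over all narrative pairs in each dataset, and the semantics-aware metrics (BERTScore, SEM-F1) complement this purely lexical view.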