LCFO: Long Context and Long Form Output Dataset and Benchmarking

📅 2024-12-11
🏛️ arXiv.org
📈 Citations: 2
Influential: 0
🤖 AI Summary
This work addresses two complementary tasks in long-text generation: *gradual summarization* and *summary expansion*. To this end, the authors propose LCFO, an evaluation framework for controllable summarization and summary expansion. LCFO comprises long documents (about 5k words on average) across 7 domains, each paired with abstractive summaries at three target lengths (20%, 10%, and 5% of the input), roughly 15 question-answer (QA) pairs per document, and alignments between specific QA pairs and the corresponding summaries. Evaluation combines human judgments of both human- and LLM-generated outputs with automatic metrics covering aspects such as fluency and attribution. In experiments, GPT-4o-mini achieves the best human scores among automatic systems, leading by roughly 10% on summarization and 20% on summary expansion, and its short summaries even surpass human-written ones by about 7%. Automatic metrics correlate weakly with human scores overall (~0.4) but moderately on specific aspects such as fluency and attribution (~0.6). LCFO thus provides a standardized, reproducible evaluation framework for controllable long-text generation.

📝 Abstract
This paper presents the Long Context and Form Output (LCFO) benchmark, a novel evaluation framework for assessing gradual summarization and summary expansion capabilities across diverse domains. LCFO consists of long input documents (5k words average length), each of which comes with three summaries of different lengths (20%, 10%, and 5% of the input text), as well as approximately 15 questions and answers (QA) related to the input content. Notably, LCFO also provides alignments between specific QA pairs and corresponding summaries in 7 domains. The primary motivation behind providing summaries of different lengths is to establish a controllable framework for generating long texts from shorter inputs, i.e., summary expansion. To establish an evaluation metric framework for summarization and summary expansion, we provide human evaluation scores for human-generated outputs, as well as results from various state-of-the-art large language models (LLMs). GPT-4o-mini achieves the best human scores among automatic systems in both summarization and summary expansion tasks (~ +10% and ~ +20%, respectively). It even surpasses human output quality in the case of short summaries (~ +7%). Overall, automatic metrics achieve low correlations with human evaluation scores (~ 0.4) but moderate correlations on specific evaluation aspects such as fluency and attribution (~ 0.6). The LCFO benchmark offers a standardized platform for evaluating summarization and summary expansion performance, as well as corresponding automatic metrics, thereby providing an important evaluation framework to advance generative AI.
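The dataset structure described in the abstract (long documents, three summary lengths, ~15 QA pairs per document, and QA–summary alignments) can be sketched as a simple record type. This is an illustrative sketch only: the field names, the `QAPair`/`LCFOExample` types, and the word-count length rule are assumptions for clarity, not the benchmark's actual schema or release format.

```python
from dataclasses import dataclass

@dataclass
class QAPair:
    """One question-answer pair tied to the input document."""
    question: str
    answer: str
    # Which summary lengths this QA pair is aligned with, e.g. ["20%", "10%"]
    aligned_summaries: list

@dataclass
class LCFOExample:
    """Hypothetical record for one LCFO document (one of 7 domains)."""
    domain: str
    document: str      # long input text, ~5k words on average
    summaries: dict    # keys "20%", "10%", "5%" -> abstractive summaries
    qa_pairs: list     # roughly 15 QAPair objects per document

def target_summary_words(document: str, ratio: float) -> int:
    """Target summary length as a fraction of the input's word count."""
    return round(len(document.split()) * ratio)
```

For a 5,000-word document, this rule yields targets of 1,000, 500, and 250 words for the 20%, 10%, and 5% summaries, respectively, which is what makes the expansion direction (short summary in, long text out) controllable.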
Problem

Research questions and friction points this paper is trying to address.

Evaluating gradual summarization and summary expansion capabilities
Providing a benchmark with long documents and multi-length summaries
Assessing human and LLM performance in text generation tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

LCFO benchmark with multi-length summaries
Human and LLM evaluation metric framework
GPT-4o-mini achieves the best human scores among automatic systems in both summarization and summary expansion
Authors
M. Costa-jussà (FAIR at Meta)
Pierre Andrews (FAIR at Meta)
Mariano Coria Meglioli (FAIR at Meta)
Joy Chen (FAIR at Meta)
Joe Chuang (FAIR at Meta)
David Dale (Meta AI)
C. Ropers (FAIR at Meta)
Alex Mourachko (FAIR at Meta)
Eduardo Sánchez (FAIR at Meta)
Holger Schwenk (Facebook)
Tuan Tran (FAIR at Meta)
Arina Turkatenko (FAIR at Meta)
Carleigh Wood (FAIR at Meta)