BLooP: Zero-Shot Abstractive Summarization using Large Language Models with Bigram Lookahead Promotion

📅 2026-03-11
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
Large language models often compromise faithfulness in zero-shot abstractive summarization, omitting critical information or introducing irrelevant content. This work proposes BLooP, a training-free approach built on a bigram lookahead promotion mechanism: at each decoding step, the model consults a bigram hash table constructed from the source document, promoting candidate tokens that would complete a source bigram and thereby steering generation toward more faithful summaries, without any model modification or additional training. Experiments across multiple models and datasets show that BLooP significantly improves ROUGE and BARTScore, and human evaluation confirms that it enhances summary faithfulness while preserving readability.

📝 Abstract
Abstractive summarization requires models to generate summaries that convey information in the source document. While large language models can generate summaries without fine-tuning, they often miss key details and include extraneous information. We propose BLooP (Bigram Lookahead Promotion), a simple training-free decoding intervention that encourages large language models (LLMs) to generate tokens that form bigrams from the source document. BLooP operates through a hash table lookup at each decoding step, requiring no training, fine-tuning, or model modification. We demonstrate improvements in ROUGE and BARTScore for Llama-3.1-8B-Instruct, Mistral-Nemo-Instruct-2407, and Gemma-2-9b-it on CNN/DM, CCSum, Multi-News, and SciTLDR. Human evaluation shows that BLooP significantly improves faithfulness without reducing readability. We make the code available at https://github.com/varuniyer/BLooP
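The decoding intervention described in the abstract can be sketched in miniature. The snippet below is a hedged illustration, not the authors' implementation: it uses a toy whitespace tokenizer and a plain score dictionary in place of model token IDs and logits, and the `bonus` value is an invented hyperparameter not taken from the paper.

```python
# Illustrative sketch of bigram lookahead promotion during decoding.
# Assumptions (not from the paper): whitespace tokens stand in for model
# token IDs, a dict of scores stands in for the logit vector, and the
# fixed additive `bonus` is a made-up hyperparameter.

from collections import defaultdict

def build_bigram_table(source_tokens):
    """Map each source token to the set of tokens that follow it,
    i.e. a hash-table view of the source document's bigrams."""
    table = defaultdict(set)
    for prev, nxt in zip(source_tokens, source_tokens[1:]):
        table[prev].add(nxt)
    return table

def promote_bigrams(scores, prev_token, table, bonus=2.0):
    """At one decoding step, add a bonus to every candidate token that
    would complete a bigram seen in the source document."""
    return {tok: s + (bonus if tok in table[prev_token] else 0.0)
            for tok, s in scores.items()}

source = "the cat sat on the mat".split()
table = build_bigram_table(source)

# Previous generated token is "the"; "mat" completes the source bigram
# "the mat", so its score is promoted, while "dog" is left unchanged.
boosted = promote_bigrams({"mat": 1.0, "dog": 1.2}, "the", table)
```

In a real LLM decoding loop this lookup would run over the vocabulary's logits at each step, which is what makes the method training-free and model-agnostic.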
Problem

Research questions and friction points this paper is trying to address.

abstractive summarization
large language models
faithfulness
zero-shot
summary generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

zero-shot summarization
bigram lookahead
training-free decoding
faithfulness
large language models