🤖 AI Summary
Large language models (LLMs) exhibit limited precision in controlling numerical attributes—such as summary length and extractiveness—in controllable summarization, hindering practical deployment aligned with user preferences. To address this, the paper proposes a guide-to-explain framework (GTE) built on a two-stage self-reflective mechanism: the model first detects attributes in the initial draft that deviate from the target constraints, then generates an explanation of those errors that guides conditional regeneration. The approach combines self-reflective prompting with multi-attribute joint constraints, improving both control fidelity and optimization efficiency. Experiments on multi-attribute controllable summarization show substantial gains: constraint satisfaction rates increase markedly, and average iteration counts drop by over 40% compared with iterative baselines that rely solely on LLMs.
📝 Abstract
Recently, large language models (LLMs) have demonstrated remarkable performance in abstractive summarization tasks. However, controllable summarization with LLMs remains underexplored, limiting their ability to generate summaries that align with specific user preferences. In this paper, we first investigate the capability of LLMs to control diverse attributes, revealing that they encounter greater challenges with numerical attributes, such as length and extractiveness, than with linguistic attributes. To address this challenge, we propose a guide-to-explain framework (GTE) for controllable summarization. Our GTE framework enables the model to identify misaligned attributes in the initial draft and guides it in explaining errors in the previous output. Based on this reflection, the model generates a well-adjusted summary. As a result, by allowing the model to reflect on its misalignment, we generate summaries that satisfy the desired attributes in surprisingly few iterations compared with other iterative methods that rely solely on LLMs.
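The guide-then-explain loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `stub_generate` is a hypothetical stand-in for an LLM call (here it simply truncates text so the loop can run end-to-end), and length is used as the example numerical attribute; the names `check_length`, `gte_loop`, and the tolerance value are assumptions for the sketch.

```python
import re

def word_count(text):
    return len(text.split())

def check_length(summary, target, tolerance=2):
    """Guide step: detect deviation of a numerical attribute (length).

    Returns 0 when the summary is within tolerance, otherwise the
    signed deviation in words.
    """
    deviation = word_count(summary) - target
    return deviation if abs(deviation) > tolerance else 0

def gte_loop(document, target_len, generate, max_iters=5):
    """Iterate: draft -> detect misaligned attribute -> explain -> regenerate."""
    summary = generate(document, feedback=None)
    for i in range(1, max_iters + 1):
        deviation = check_length(summary, target_len)
        if deviation == 0:
            return summary, i  # constraint satisfied
        # Explain step: turn the detected deviation into explicit feedback
        # the model can condition on when regenerating.
        direction = "shorten" if deviation > 0 else "lengthen"
        feedback = (f"The draft has {word_count(summary)} words but the "
                    f"target is {target_len}; {direction} it by about "
                    f"{abs(deviation)} words.")
        summary = generate(document, feedback=feedback)
    return summary, max_iters

def stub_generate(document, feedback=None):
    """Toy stand-in for an LLM: first drafts 20 words, then follows
    the target length stated in the feedback by truncating."""
    words = document.split()
    if feedback is None:
        return " ".join(words[:20])
    target = int(re.search(r"target is (\d+)", feedback).group(1))
    return " ".join(words[:target])
```

With a real model, `generate` would prompt the LLM with the document plus the explanation string; the point of the sketch is that the explicit explanation, rather than a bare retry, is what drives the regeneration toward the constraint.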