🤖 AI Summary
This work addresses the limited factual consistency of summaries generated by large language models (LLMs). We propose the Detect-Critique-Refine (DCR) framework, a three-stage pipeline that decouples factual optimization into: (1) a discriminative model for precise error detection; (2) a dedicated critique model that generates fine-grained, natural-language feedback identifying specific factual inaccuracies; and (3) a refinement model that reconstructs the summary guided by the critique. Our key contribution is the first instantiation of fully decoupled yet jointly optimized modules: freeing the critique model from discriminative responsibilities improves feedback precision and interpretability, and enables scalable collaboration among LLMs of different sizes. On multiple factual consistency benchmarks, DCR substantially outperforms end-to-end refinement methods and baselines whose critics are not specialized for factuality.
📝 Abstract
Recent work has explored the capability of large language models (LLMs) to identify and correct errors in LLM-generated responses. These refinement approaches frequently evaluate which model sizes can perform refinement on which problems, but pay less attention to what effective feedback for refinement looks like. In this work, we propose viewing refinement with feedback as a composition of three distinct LLM competencies: (1) detecting bad generations; (2) generating fine-grained natural-language critiques; and (3) refining with fine-grained feedback. The first step can be implemented with a high-performing discriminative model, and steps 2 and 3 can be implemented via either prompted or fine-tuned LLMs. A key property of the proposed Detect, Critique, Refine ("DCR") method is that the step-2 critique model can give fine-grained feedback about errors, made possible by offloading discrimination to a separate model in step 1. We show that models of different capabilities benefit from refining with DCR on the task of improving the factual consistency of document-grounded summaries. Overall, DCR consistently outperforms existing end-to-end refinement approaches and current trained critique models that are not fine-tuned for factuality.
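The three-stage flow described above can be sketched as a minimal pipeline. This is an illustrative stand-in, not the paper's implementation: the function bodies use a toy word-overlap heuristic where the actual method would call a discriminative model (stage 1) and prompted or fine-tuned LLMs (stages 2 and 3), and all names here (`detect`, `critique`, `refine`, `dcr`) are hypothetical.

```python
def detect(summary: str, document: str) -> bool:
    """Stage 1 (toy): flag summaries containing terms unsupported by the
    source document. A real system would use a discriminative factuality model."""
    doc_words = set(document.lower().split())
    return any(w.lower() not in doc_words for w in summary.split())


def critique(summary: str, document: str) -> str:
    """Stage 2 (toy): produce fine-grained natural-language feedback naming
    the unsupported spans. Stand-in for a dedicated critique LLM; note it
    never has to decide *whether* the summary is bad -- stage 1 already did."""
    doc_words = set(document.lower().split())
    bad = [w for w in summary.split() if w.lower() not in doc_words]
    return "Unsupported terms: " + ", ".join(bad)


def refine(summary: str, feedback: str) -> str:
    """Stage 3 (toy): regenerate the summary guided by the critique. Here we
    simply drop the flagged terms; a real system would prompt a refiner LLM."""
    bad = set(feedback.removeprefix("Unsupported terms: ").split(", "))
    return " ".join(w for w in summary.split() if w not in bad)


def dcr(summary: str, document: str) -> str:
    """Compose the three stages; only flagged summaries are refined."""
    if not detect(summary, document):
        return summary
    feedback = critique(summary, document)
    return refine(summary, feedback)


doc = "the cat sat on the mat"
print(dcr("the dog sat on the mat", doc))  # unsupported term "dog" is removed
print(dcr("the cat sat", doc))             # consistent summary passes through
```

The design point the sketch makes concrete: because stage 1 owns the accept/reject decision, the stage-2 critic is free to describe errors rather than judge their presence, which is what allows its feedback to be fine-grained.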