Attribution, Citation, and Quotation: A Survey of Evidence-based Text Generation with Large Language Models

📅 2025-08-21
🤖 AI Summary
The evidence-based text generation (EBTG) field suffers from terminological ambiguity, fragmented evaluation practices, and the absence of standardized benchmarks. Method: A systematic review of 134 papers yields the first unified taxonomy of EBTG, covering three core operations (citation, attribution, and quotation), together with an analysis of 300 evaluation metrics across seven key dimensions. Contribution/Results: The survey resolves long-standing fragmentation in EBTG research, establishes a rigorous foundation for assessing the traceability, verifiability, and trustworthiness of model outputs, and outlines open challenges and promising directions for reliable large language model development.

📝 Abstract
The increasing adoption of large language models (LLMs) has been accompanied by growing concerns regarding their reliability and trustworthiness. As a result, an expanding body of research focuses on evidence-based text generation with LLMs, aiming to link model outputs to supporting evidence to ensure traceability and verifiability. However, the field is fragmented due to inconsistent terminology, isolated evaluation practices, and a lack of unified benchmarks. To bridge this gap, we systematically analyze 134 papers, introduce a unified taxonomy of evidence-based text generation with LLMs, and investigate 300 evaluation metrics across seven key dimensions. In doing so, we focus on approaches that use citations, attribution, or quotations for evidence-based text generation. Building on this, we examine the distinctive characteristics and representative methods in the field. Finally, we highlight open challenges and outline promising directions for future work.
Problem

Research questions and friction points this paper is trying to address.

Addressing inconsistent terminology in evidence-based text generation
Analyzing fragmented evaluation practices across 300 metrics
Establishing a unified taxonomy for attribution and citation methods
Innovation

Methods, ideas, or system contributions that make the work stand out.

Systematic analysis of 134 research papers
A unified taxonomy for evidence-based text generation
Investigation of 300 evaluation metrics across seven key dimensions