DATE-LM: Benchmarking Data Attribution Evaluation for Large Language Models

📅 2025-07-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing data attribution methods lack systematic, scenario-driven evaluation tailored to large language models (LLMs). To address this, we propose DATE-LM, the first LLM-centric, unified benchmark for data attribution, covering three realistic tasks: training data selection, toxicity/bias filtering, and factual attribution. It supports heterogeneous model architectures and plug-and-play evaluation. DATE-LM integrates mainstream techniques, including influence functions, gradient-based tracing, and feature attribution, and conducts large-scale empirical analysis across diverse settings. Results reveal that no single method dominates across all tasks; most attribution approaches perform comparably to simple baselines; and effectiveness is highly contingent on task-specific design choices. DATE-LM uncovers fundamental trade-offs and limitations of current methods, establishes the first public leaderboard, and provides a reproducible, scalable evaluation paradigm for data attribution research.

📝 Abstract
Data attribution methods quantify the influence of training data on model outputs and are becoming increasingly relevant for a wide range of LLM research and applications, including dataset curation, model interpretability, and data valuation. However, there remain critical gaps in systematic, LLM-centric evaluation of data attribution methods. To this end, we introduce DATE-LM (Data Attribution Evaluation in Language Models), a unified benchmark for evaluating data attribution methods through real-world LLM applications. DATE-LM measures attribution quality through three key tasks -- training data selection, toxicity/bias filtering, and factual attribution. Our benchmark is designed for ease of use, enabling researchers to configure and run large-scale evaluations across diverse tasks and LLM architectures. Furthermore, we use DATE-LM to conduct a large-scale evaluation of existing data attribution methods. Our findings show that no single method dominates across all tasks, data attribution methods have trade-offs with simpler baselines, and method performance is sensitive to task-specific evaluation design. Finally, we release a public leaderboard for quick comparison of methods and to facilitate community engagement. We hope DATE-LM serves as a foundation for future data attribution research in LLMs.
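To make "quantifying the influence of training data on model outputs" concrete, here is a minimal sketch of one family of methods the abstract mentions, gradient-based attribution (in the style of TracIn-like gradient dot products). The linear model, toy data, and function names are all hypothetical illustrations, not part of DATE-LM itself: a training example gets a high score when its loss gradient aligns with the test example's loss gradient, meaning a descent step on it would also reduce the test loss.

```python
import numpy as np

def grad_loss(w, x, y):
    # Gradient of the squared-error loss 0.5 * (w·x - y)^2 w.r.t. weights w
    return (w @ x - y) * x

def attribution_score(w, train_ex, test_ex):
    # Single-checkpoint gradient dot product: positive scores mark
    # "proponents" (training points whose update would lower test loss),
    # negative scores mark "opponents".
    (x_tr, y_tr), (x_te, y_te) = train_ex, test_ex
    return float(grad_loss(w, x_tr, y_tr) @ grad_loss(w, x_te, y_te))

# Toy weights and data (hypothetical)
w = np.array([0.5, -0.2])
train = [(np.array([1.0, 0.0]), 1.0),
         (np.array([0.0, 1.0]), -1.0)]
test = (np.array([1.0, 0.1]), 1.0)

scores = [attribution_score(w, z, test) for z in train]
ranking = np.argsort(scores)[::-1]  # training examples, most influential first
```

In practice, LLM-scale methods replace the toy gradients with per-example gradients of the language-modeling loss (often projected or approximated for tractability); the ranking produced this way is exactly what DATE-LM's tasks, such as training data selection and factual attribution, evaluate.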
Problem

Research questions and friction points this paper is trying to address.

Evaluate data attribution methods for LLMs systematically
Assess influence of training data on model outputs
Benchmark performance across diverse tasks and architectures
Innovation

Methods, ideas, or system contributions that make the work stand out.

Introduces DATE-LM benchmark for data attribution
Evaluates methods via real-world LLM applications
Public leaderboard for method comparison