On the Feasibility of In-Context Probing for Data Attribution

📅 2024-07-17
📈 Citations: 1
Influential: 0
📄 PDF
🤖 AI Summary
Gradient-based data attribution methods (e.g., influence functions) suffer from high computational cost and poor scalability. To address this, we propose a gradient-free, low-cost data attribution paradigm that replaces gradient computation with in-context probing (ICP) of large language models. Across standard NLP tasks and synthetic experiments, we demonstrate that ICP scores correlate strongly (Spearman's ρ > 0.85) with influence-function attributions when the evaluated task shares task type and content with the training data. Moreover, fine-tuning on the top-ranked samples identified by either method yields comparable downstream performance gains. Because ICP requires no backpropagation, it drastically reduces computational overhead while preserving interpretability. This enables scalable, efficient, and transparent data provenance, offering a practical foundation for dataset curation, model diagnostics, and trustworthy AI development.
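For context, the gradient-based baseline referenced above is typically the influence-function estimate of Koh & Liang (2017), which approximates the effect of upweighting a training point $z$ on the loss at a test point $z_{\text{test}}$:

```latex
\mathcal{I}(z, z_{\text{test}})
  = -\,\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top}
     H_{\hat\theta}^{-1}\,
     \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n}\sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat\theta)
```

Forming the Hessian $H_{\hat\theta}$ (or even Hessian-vector products) over all model parameters is what makes this expensive; ICP's appeal is that it sidesteps this computation entirely.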

📝 Abstract
Data attribution methods are used to measure the contribution of training data towards model outputs, and have several important applications in areas such as dataset curation and model interpretability. However, many standard data attribution methods, such as influence functions, utilize model gradients and are computationally expensive. In our paper, we show that in-context probing (ICP) -- prompting an LLM -- can serve as a fast proxy for gradient-based data attribution for data selection, under conditions contingent on data similarity. We study this connection empirically on standard NLP tasks, and show that ICP and gradient-based data attribution are well-correlated in identifying influential training data for tasks that share task type and content with the training data. Additionally, fine-tuning models on influential data selected by either method achieves comparable downstream performance, further emphasizing their similarities. We also examine the connection between ICP and gradient-based data attribution using synthetic data on linear regression tasks. Our synthetic data experiments show results similar to those from the NLP tasks, suggesting that this connection can be isolated in simpler settings, which offers a pathway to bridging their differences.
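The abstract's linear-regression experiments can be sketched in a few lines. The toy setup below is an illustrative assumption, not the paper's protocol: the dimensions, the one-gradient-step stand-in for "in-context" conditioning, and the rank-correlation check are all placeholders for the real ICP procedure, which prompts an LLM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression with squared loss (illustrative dimensions).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)
w_hat = np.linalg.lstsq(X, y, rcond=None)[0]  # least-squares fit

# One query (test) point.
x_q = rng.normal(size=d)
y_q = x_q @ w_true + 0.1 * rng.normal()

# Gradient-based attribution via influence functions: predicted reduction
# in query loss from upweighting each training point. For squared loss the
# Hessian is H = (2/n) X^T X, so no autodiff is needed here.
grads = 2.0 * (X @ w_hat - y)[:, None] * X        # per-example loss gradients
grad_q = 2.0 * (x_q @ w_hat - y_q) * x_q          # query-loss gradient
H = 2.0 * X.T @ X / n
influence = grads @ np.linalg.solve(H, grad_q)

# ICP-style proxy (hypothetical stand-in for prompting an LLM): score each
# training point by how much "conditioning" on it alone -- here, one small
# gradient step from the fitted weights -- reduces the query loss.
lr = 1e-3
base_loss = (x_q @ w_hat - y_q) ** 2
icp = np.array([base_loss - (x_q @ (w_hat - lr * g) - y_q) ** 2 for g in grads])

def spearman(a, b):
    """Spearman rank correlation (no ties expected for continuous scores)."""
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

rho = spearman(influence, icp)
print(f"Spearman rho between influence and ICP-style scores: {rho:.3f}")
```

In this simplified setting the two rankings agree closely, mirroring the correlation the paper reports; the actual experiments replace the gradient-step proxy with genuine in-context prompting.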
Problem

Research questions and friction points this paper is trying to address.

Evaluate ICP as a data attribution proxy
Compare ICP with gradient-based attribution methods
Test ICP on NLP and linear regression tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

In-context probing for data attribution
Fast proxy for gradient-based methods
Empirical study on NLP tasks