LoRIF: Low-Rank Influence Functions for Scalable Training Data Attribution

📅 2026-01-29
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the scalability challenges in training data attribution for large-scale models, where high storage overhead and query latency, especially under high-dimensional projections, force a trade-off between efficiency and attribution quality. The authors propose LoRIF, a novel method that introduces low-rank decomposition into influence function computation. By combining low-rank factor storage, truncated singular value decomposition (SVD), and the Woodbury identity, LoRIF efficiently approximates the inverse Hessian term, substantially alleviating I/O and memory burdens. Evaluated across models ranging from 0.1B to 70B parameters, LoRIF achieves up to 20× storage compression and query acceleration compared to LoGRA, while maintaining or even improving attribution accuracy, thereby breaking the long-standing trade-off between attribution fidelity and scalability.
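The low-rank factor storage idea can be illustrated with a small NumPy sketch (not the authors' code; all sizes and names are hypothetical): instead of storing a projected per-example gradient as a full $d \times d$ matrix, one keeps rank-$c$ SVD factors, and attribution scores, which are inner products between gradients, can be computed directly from the factors.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 32  # sqrt of the projection dimension D: gradients reshaped to d x d per layer
c = 4   # factor rank kept per example (hypothetical choice)

# A projected per-example gradient (toy stand-in; real ones are near low-rank)
G = rng.standard_normal((d, c)) @ rng.standard_normal((c, d))

# Keep rank-c SVD factors instead of the full d x d matrix
U, s, Vt = np.linalg.svd(G, full_matrices=False)
A = U[:, :c] * s[:c]      # d x c left factor, singular values absorbed
B = Vt[:c]                # c x d right factor
stored = A.size + B.size  # 2*c*d floats instead of d*d

# Attribution scores are inner products <G_train, G_query>;
# they can be evaluated from the factors without rebuilding G
Gq = rng.standard_normal((d, d))     # a query gradient
score_full = np.sum(G * Gq)          # <G, Gq> from the dense matrix
score_fact = np.trace(B @ Gq.T @ A)  # same value via trace(Gq^T A B)
assert np.isclose(score_full, score_fact)
assert stored < G.size  # 256 floats vs 1024 in this toy setting
```

This matches the summary's claim that per-sample storage drops from $O(D)$ to $O(c\sqrt{D})$ per layer: the two factors cost $2cd = 2c\sqrt{D}$ floats versus $d^2 = D$ for the dense matrix.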

📝 Abstract
Training data attribution (TDA) identifies which training examples most influenced a model's prediction. The best-performing TDA methods exploit gradients to define an influence function. To overcome the scalability challenge arising from gradient computation, the most popular strategy is random projection (e.g., TRAK, LoGRA). However, this still faces two bottlenecks when scaling to large training sets and high-quality attribution: \emph{(i)} storing and loading projected per-example gradients for all $N$ training examples, where query latency is dominated by I/O; and \emph{(ii)} forming the $D \times D$ inverse Hessian approximation, which costs $O(D^2)$ memory. Both bottlenecks scale with the projection dimension $D$, yet increasing $D$ is necessary for attribution quality -- creating a quality-scalability trade-off. We introduce \textbf{LoRIF (Low-Rank Influence Functions)}, which exploits the low-rank structure of gradients to address both bottlenecks. First, we store rank-$c$ factors of the projected per-example gradients rather than full matrices, reducing storage and query-time I/O from $O(D)$ to $O(c\sqrt{D})$ per layer per sample. Second, we use truncated SVD with the Woodbury identity to approximate the Hessian term in an $r$-dimensional subspace, reducing memory from $O(D^2)$ to $O(Dr)$. On models from 0.1B to 70B parameters trained on datasets with millions of examples, LoRIF achieves up to 20$\times$ storage reduction and query-time speedup compared to LoGRA, while matching or exceeding its attribution quality. LoRIF makes gradient-based TDA practical at frontier scale.
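The second bottleneck, applying the damped inverse Hessian without materializing a $D \times D$ matrix, can be sketched as follows (a minimal illustration, not the paper's implementation; sizes, variable names, and the damping value are assumptions). If a truncated SVD gives a rank-$r$ eigenfactorization $H \approx U \,\mathrm{diag}(s)\, U^\top$ with orthonormal $U \in \mathbb{R}^{D \times r}$, the Woodbury/eigenvalue-shift identity lets us apply $(H + \lambda I)^{-1}$ to a vector using only the $O(Dr)$ factors:

```python
import numpy as np

rng = np.random.default_rng(0)
D, r, lam = 64, 8, 0.1  # projection dim, truncation rank, damping (toy sizes)

# Rank-r eigenfactors of the Hessian approximation H ≈ U diag(s) U^T
U, _ = np.linalg.qr(rng.standard_normal((D, r)))  # orthonormal D x r basis
s = rng.uniform(0.5, 5.0, r)                      # top-r eigenvalues

def ihvp(v):
    """Apply (H + lam*I)^{-1} to v using only the rank-r factors.

    On span(U) the eigenvalues shift to s + lam; on the orthogonal
    complement the operator is just lam*I. Memory is O(D*r), never O(D^2).
    """
    coeff = U.T @ v
    return v / lam + U @ ((1.0 / (s + lam) - 1.0 / lam) * coeff)

# Sanity check against the dense inverse (only feasible at toy scale)
v = rng.standard_normal(D)
H = U @ np.diag(s) @ U.T
dense = np.linalg.solve(H + lam * np.eye(D), v)
assert np.allclose(ihvp(v), dense)
```

The check at the end confirms the factored routine agrees with the explicit $D \times D$ solve; at the paper's scales only the factored path is affordable.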
Problem

Research questions and friction points this paper is trying to address.

Training Data Attribution
Scalability
Influence Functions
Gradient Storage
Hessian Approximation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Low-Rank Influence Functions
Training Data Attribution
Scalable Attribution
Gradient Low-Rank Approximation
Hessian Approximation