🤖 AI Summary
This work addresses the challenge of computing influence functions in overparameterized models, where directly inverting the high-dimensional curvature operator is infeasible and existing random projection methods lack theoretical guarantees. We provide the first rigorous characterization of necessary and sufficient conditions under which random projections preserve influence functions, covering the unregularized, ridge-regularized, and Kronecker-factored approximate settings, and we introduce a "leakage" term to handle test gradients that lie outside the curvature operator's range. Leveraging the Johnson–Lindenstrauss lemma and an effective-dimension analysis, we establish a quantitative relationship between the required projection dimension and the curvature's rank or effective dimension, offering both theoretical assurance and practical guidance for scalable, accurate influence analysis.
📝 Abstract
Influence functions and related data attribution scores take the form $g^{\top}F^{-1}g^{\prime}$, where $F\succeq 0$ is a curvature operator. In modern overparameterized models, forming or inverting $F\in\mathbb{R}^{d\times d}$ is prohibitive, motivating scalable influence computation via random projection with a sketch $P \in \mathbb{R}^{m\times d}$. This practice is commonly justified via the Johnson–Lindenstrauss (JL) lemma, which ensures approximate preservation of Euclidean geometry for a fixed dataset. However, JL does not address how sketching behaves under inversion, and no existing theory explains how sketching interacts with other widely used techniques, such as ridge regularization and structured curvature approximations. We develop a unified theory characterizing when projection provably preserves influence functions. When $g,g^{\prime}\in\text{range}(F)$, we show that:

1. **Unregularized projection:** exact preservation holds iff $P$ is injective on $\text{range}(F)$, which necessitates $m\geq \text{rank}(F)$.
2. **Regularized projection:** ridge regularization fundamentally alters the sketching barrier, with approximation guarantees governed by the effective dimension of $F$ at the regularization scale.
3. **Factorized influence:** for Kronecker-factored curvatures $F=A\otimes E$, the guarantees continue to hold for decoupled sketches $P=P_A\otimes P_E$, even though such sketches exhibit row correlations that violate i.i.d. assumptions.

Beyond this range-restricted setting, we analyze out-of-range test gradients and quantify a *leakage* term that arises when test gradients have components in $\ker(F)$. This yields guarantees for influence queries on general test points. Overall, this work develops a theory that characterizes when projection provably preserves influence and provides principled guidance for choosing the sketch size in practice.
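As a concrete numerical illustration of the unregularized case (result 1), the NumPy sketch below checks exact preservation. It assumes the standard projected estimator $(Pg)^{\top}(PFP^{\top})^{+}(Pg^{\prime})$ (the abstract does not spell out the estimator, so this form is an assumption), a Gaussian sketch $P$, and gradients constrained to $\text{range}(F)$. With $m \geq \text{rank}(F)$, a Gaussian $P$ is almost surely injective on $\text{range}(F)$, so the sketched and exact scores should coincide up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, m = 200, 10, 20  # ambient dim, rank(F), sketch dim (m >= rank(F))

# Low-rank PSD curvature F = U diag(lam) U^T with rank r.
U, _ = np.linalg.qr(rng.standard_normal((d, r)))
lam = rng.uniform(0.5, 2.0, size=r)
F = U @ np.diag(lam) @ U.T

# Train/test gradients constrained to range(F).
g = U @ rng.standard_normal(r)
gp = U @ rng.standard_normal(r)

# Exact influence g^T F^+ g' (pseudoinverse, since F is rank-deficient).
exact = g @ np.linalg.pinv(F) @ gp

# Gaussian sketch; with m >= rank(F) it is injective on range(F) a.s.
P = rng.standard_normal((m, d)) / np.sqrt(m)
sketched = (P @ g) @ np.linalg.pinv(P @ F @ P.T) @ (P @ gp)

print(abs(sketched - exact))  # agrees up to floating-point error
```

Shrinking the sketch to $m < \text{rank}(F)$ breaks injectivity on $\text{range}(F)$ and the identity fails, consistent with the necessity of $m \geq \text{rank}(F)$ stated above.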