Select or Project? Evaluating Lower-dimensional Vectors for LLM Training Data Explanations

📅 2026-01-23
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study addresses the high computational cost of gradient-based explanation methods for large language models, which stems from the extremely high dimensionality of model gradients, and the lack of systematic evaluation and theoretical grounding for existing dimensionality reduction strategies. The authors construct a new benchmark to systematically compare the performance and efficiency of structure-informed component selection against full-gradient projection in training data influence attribution tasks. They propose and validate an architecture-aware greedy component selection method that substantially reduces computational overhead while preserving high explanation fidelity. Experimental results demonstrate that the selected subset of critical components outperforms both full-gradient and random-projection approaches in training data retrieval, achieving superior accuracy and efficiency.

📝 Abstract
Gradient-based methods for instance-based explanation of large language models (LLMs) are hindered by the immense dimensionality of model gradients. In practice, influence estimation is restricted to a subset of model parameters to make computation tractable, but this subset is often chosen ad hoc and rarely justified by systematic evaluation. This paper investigates whether it is better to create low-dimensional representations by selecting a small, architecturally informed subset of model components or by projecting the full gradients into a lower-dimensional space. Using a novel benchmark, we show that a greedily selected subset of components captures the information about training data influence needed for a retrieval task more effectively than either the full gradient or random projection. We further find that this approach is more computationally efficient than random projection, demonstrating that targeted component selection is a practical strategy for making instance-based explanations of large models more computationally feasible.
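The two strategies the abstract contrasts can be sketched with toy gradient vectors. This is a minimal illustration of the mechanics only, not the paper's method: the gradients are random stand-ins, the component subset is chosen randomly rather than by the paper's architecture-aware greedy procedure, and influence is approximated as a plain gradient dot product.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-example loss gradients (not real model gradients):
# rows = training examples, columns = model parameters.
d = 10_000
g_train = rng.normal(size=(100, d))   # gradients of 100 training examples
g_test = rng.normal(size=d)           # gradient of one query example

k = 512  # target low dimension

# Strategy 1: component selection — keep only a fixed subset of coordinates.
# (The paper selects components greedily per architecture; a random subset
# here just shows the mechanics.)
selected = rng.choice(d, size=k, replace=False)
infl_select = g_train[:, selected] @ g_test[selected]

# Strategy 2: random projection — map full gradients into k dimensions
# with a Gaussian sketch (Johnson–Lindenstrauss-style), then score there.
P = rng.normal(size=(d, k)) / np.sqrt(k)
infl_project = (g_train @ P) @ (P.T @ g_test)

# Both low-dimensional scores approximate the full-gradient dot product,
# which is what the retrieval benchmark would rank training examples by.
infl_full = g_train @ g_test
ranking = np.argsort(-infl_select)  # most influential training examples first
```

Note that selection only ever touches `k` of the `d` gradient entries per example, whereas random projection must first materialize the full `d`-dimensional gradient and multiply it by `P` — the source of the efficiency gap the paper reports.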
Problem

Research questions and friction points this paper is trying to address.

large language models
instance-based explanation
gradient dimensionality
influence estimation
computational tractability
Innovation

Methods, ideas, or system contributions that make the work stand out.

gradient-based explanation
component selection
low-dimensional representation
training data influence
large language models
Lukas Hinterleitner
Faculty of Computer Science, University of Vienna, Vienna, Austria
Loris Schoenegger
Faculty of Computer Science, University of Vienna, Vienna, Austria; UniVie Doctoral School Computer Science, University of Vienna, Vienna, Austria
Benjamin Roth
University of Vienna
Natural Language Processing, Machine Learning