First is Not Really Better Than Last: Evaluating Layer Choice and Aggregation Strategies in Language Model Data Influence Estimation

📅 2025-11-06
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work challenges the prevailing “first-layer (embedding layer) optimality” assumption in LLM data influence estimation, arguing that its reliance on gradient cancellation is unreliable. Methodologically, it introduces three innovations: (1) theoretical analysis and empirical evidence showing that intermediate attention layers, rather than the embedding layer, more stably encode sample-level influence on model decisions; (2) a cross-layer influence score aggregation mechanism that uses ranking and voting to preserve discriminative signal and avoid the information loss of naive averaging; (3) a novel, retraining-free evaluation metric, the Noise Detection Rate (NDR), enabling the first direct, scalable quantification of influence estimation quality. Experiments across multiple LLM scales show that using intermediate layers improves influence identification accuracy by +12.7%–28.3%; the proposed aggregation outperforms baselines by 15.4%–33.6%; and NDR exhibits strong correlation with ground-truth labels (Spearman ρ = 0.89), overcoming longstanding evaluation bottlenecks.
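The cross-layer rank-and-vote aggregation described above can be sketched as follows. This is a minimal illustration, not the paper's exact procedure: the function name, the top-`k` voting rule, and the score array shapes are all assumptions.

```python
import numpy as np

def aggregate_influence(scores_by_layer, top_k=2):
    """Aggregate per-layer influence scores via rank-based voting.

    scores_by_layer: array of shape (n_layers, n_samples), one influence
    score per training sample per layer. Instead of averaging raw scores
    (whose scales differ across layers), each layer casts a vote for every
    sample it ranks in its top_k, and samples are ordered by vote count.
    Returns (ranking, votes): sample indices sorted by votes, and the votes.
    """
    scores = np.asarray(scores_by_layer, dtype=float)
    n_layers, n_samples = scores.shape
    votes = np.zeros(n_samples, dtype=int)
    for layer_scores in scores:
        top = np.argsort(layer_scores)[::-1][:top_k]  # highest scores first
        votes[top] += 1
    return np.argsort(votes)[::-1], votes
```

Because each layer contributes only ranks, a layer whose raw scores are orders of magnitude larger than the others (as happens across transformer layers) cannot dominate the aggregate, which is the failure mode of naive averaging that the paper targets.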

📝 Abstract
Identifying how training samples influence Large Language Model (LLM) decision-making is essential for effectively interpreting model decisions and auditing large-scale datasets. Current training-sample influence estimation methods (also known as influence functions) pursue this goal by utilizing information flow through the model via its first-order and higher-order gradient terms. However, because today's models consist of billions of parameters, these influence computations are often restricted to some subset of model layers to ensure computational feasibility. Prior seminal work by Yeh et al. (2022) on assessing which layers are best suited for computing language data influence concluded that the first (embedding) layers are the most informative for this purpose, based on a hypothesis about influence scores canceling out (i.e., the cancellation effect). In this work, we present theoretical and empirical evidence demonstrating that the cancellation effect is unreliable, and that middle attention layers are better estimators for influence. Furthermore, we address the broader challenge of aggregating influence scores across layers, and showcase how alternatives to standard averaging (such as ranking- and vote-based methods) can lead to significantly improved performance. Finally, we propose better methods for evaluating influence score efficacy in LLMs without undertaking model retraining, introducing a new metric known as the Noise Detection Rate (NDR) that exhibits strong predictive capability compared to the cancellation effect. Through extensive experiments across LLMs of varying types and scales, we concretely determine that the first (layers) are not necessarily better than the last (layers) for LLM influence estimation, contrasting with prior knowledge in the field.
Problem

Research questions and friction points this paper is trying to address.

Evaluating optimal layer selection strategies for LLM influence estimation
Challenging prior assumptions about embedding layer superiority in influence analysis
Developing improved aggregation methods for cross-layer influence scores
Innovation

Methods, ideas, or system contributions that make the work stand out.

Using middle attention layers for influence estimation
Aggregating scores via ranking- and vote-based methods
Introducing Noise Detection Rate metric for evaluation
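As a rough illustration of how a retraining-free metric like the Noise Detection Rate might be computed: the paper's exact definition is not reproduced here, so the inspection budget, the boolean noise mask, and the use of per-sample influence scores are assumptions in this sketch. The idea follows the common noisy-label-detection protocol: deliberately corrupt some training samples, then check what fraction of them surface among the highest-influence samples.

```python
import numpy as np

def noise_detection_rate(influence_scores, noisy_mask, budget=None):
    """Fraction of injected noisy samples recovered among the top-scored ones.

    influence_scores: one influence score per training sample.
    noisy_mask: boolean array, True where a sample was deliberately corrupted.
    budget: how many top-ranked samples to inspect; defaults to the number
    of injected noisy samples.
    """
    scores = np.asarray(influence_scores, dtype=float)
    noisy = np.asarray(noisy_mask, dtype=bool)
    if budget is None:
        budget = int(noisy.sum())
    top = np.argsort(scores)[::-1][:budget]  # indices of the highest scores
    return noisy[top].sum() / max(int(noisy.sum()), 1)
```

A score near 1.0 means the influence estimator reliably flags the corrupted samples, giving a direct quality signal without any leave-one-out retraining.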
👥 Authors
Dmytro Vitel
Bellini College of Artificial Intelligence, Cybersecurity, and Computing, University of South Florida
Anshuman Chhabra
Assistant Professor of Computer Science and Engineering, University of South Florida
AI Safety · Robust AI · Trustworthy AI