LayerIF: Estimating Layer Quality for Large Language Models using Influence Functions

📅 2025-05-27
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
Existing layer-wise evaluation methods for large language models (LLMs) overlook data influence and assume uniform training quality across layers, failing to capture task-dependent functional specialization. Method: We propose LayerIF, the first data-driven, influence-function-based framework for layer-wise quality assessment. LayerIF computes the influence of individual training samples on validation loss per layer, yielding task-sensitive layer importance scores without architectural assumptions. Contribution/Results: LayerIF is model-agnostic and task-adaptive. Experiments across diverse LLM architectures demonstrate that LayerIF-guided LoRA-MoE expert assignment and layer-wise sparsification significantly improve downstream performance. The method exhibits strong generalization and plug-and-play usability, enabling effective layer-aware adaptation without retraining or architecture modification.

πŸ“ Abstract
Pretrained Large Language Models (LLMs) achieve strong performance across a wide range of tasks, yet the training quality of their individual layers varies substantially with respect to specific downstream applications, limiting downstream performance. It is therefore critical to estimate layer-wise training quality in a manner that accounts for both model architecture and training data. However, existing approaches predominantly rely on model-centric heuristics (such as spectral statistics, outlier detection, or uniform allocation) while overlooking the influence of data. To address these limitations, we propose LayerIF, a data-driven framework that leverages Influence Functions to quantify the training quality of individual layers in a principled and task-sensitive manner. By isolating each layer's gradients and computing layer-wise influences, which measure the sensitivity of the validation loss to individual training examples, we derive data-driven estimates of layer importance. Notably, our method produces task-specific layer importance estimates for the same LLM, revealing how layers specialize for different test-time evaluation tasks. We demonstrate the utility of our scores by leveraging them for two downstream applications: (a) expert allocation in LoRA-MoE architectures and (b) layer-wise sparsity distribution for LLM pruning. Experiments across multiple LLM architectures demonstrate that our model-agnostic, influence-guided allocation leads to consistent gains in task performance.
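The layer-wise influence idea described above can be sketched in a much-simplified form. The snippet below is a hypothetical toy, not the paper's implementation: it uses a two-layer linear model with squared loss, replaces the inverse-Hessian term of classical influence functions with an identity-Hessian (gradient dot-product, TracIn-style) approximation, and all function names are made up. It illustrates only the core mechanic, isolating per-layer gradients and scoring training samples against validation gradients layer by layer.

```python
import numpy as np

def layer_gradients(W1, W2, x, y):
    """Per-layer gradients of 0.5*||W2 @ W1 @ x - y||^2 for one example."""
    h = W1 @ x                 # hidden activations
    err = W2 @ h - y           # d loss / d prediction
    dW2 = np.outer(err, h)     # gradient w.r.t. layer 2
    dW1 = np.outer(W2.T @ err, x)  # gradient w.r.t. layer 1
    return dW1, dW2

def layerwise_influence(W1, W2, train_set, val_set):
    """Identity-Hessian influence per layer: <g_layer(val), g_layer(z)>,
    aggregated over training samples into one importance score per layer."""
    gv1, gv2 = np.zeros_like(W1), np.zeros_like(W2)
    for x, y in val_set:       # accumulate validation gradients per layer
        d1, d2 = layer_gradients(W1, W2, x, y)
        gv1 += d1
        gv2 += d2
    scores = []
    for x, y in train_set:     # dot each training gradient against them
        d1, d2 = layer_gradients(W1, W2, x, y)
        scores.append((np.sum(gv1 * d1), np.sum(gv2 * d2)))
    # mean absolute influence across training samples -> one score per layer
    return np.mean(np.abs(np.array(scores)), axis=0)
```

In the full method these scores would come from an LLM's actual layers and a proper inverse-Hessian-vector-product estimator; the toy keeps only the structure of the computation.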
Problem

Research questions and friction points this paper is trying to address.

Estimating layer quality in LLMs for downstream tasks
Addressing data influence overlooked by model-centric heuristics
Providing task-specific layer importance for LLM optimization
Innovation

Methods, ideas, or system contributions that make the work stand out.

Leverages Influence Functions for layer quality estimation
Isolates layer gradients for data-driven importance scores
Enables task-specific layer allocation in LLM architectures
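One way the importance scores could drive LoRA-MoE expert allocation is a simple proportional split of a fixed expert budget. The paper's actual allocation rule is not given here, so the function below is a hypothetical scheme (largest-remainder rounding with a one-expert-per-layer floor), included only to make the "layer allocation" idea concrete.

```python
import numpy as np

def allocate_experts(importance, total_experts):
    """Split a fixed expert budget across layers proportionally to their
    importance scores, guaranteeing at least one expert per layer
    (largest-remainder rounding)."""
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()
    budget = total_experts - len(w)          # reserve one expert per layer
    raw = w * budget
    counts = np.floor(raw).astype(int)
    leftover = budget - counts.sum()
    order = np.argsort(-(raw - counts))      # largest fractional parts first
    counts[order[:leftover]] += 1
    return counts + 1                        # add back the per-layer floor
```

For example, scores `[5, 2, 2, 1]` with a budget of 10 experts yield `[4, 2, 2, 2]`: the dominant layer gets the most experts while every layer keeps at least one.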