🤖 AI Summary
This work addresses the high computational cost and limited scalability of gradient-based data selection for large language model (LLM) fine-tuning. Existing proxy models, typically smaller networks with fixed architectures, suffer from mismatched learning dynamics and cannot be faithfully aligned with the target LLM's gradient-based influence, which limits their effectiveness. To overcome these limitations, we propose Iprox, a framework that constructs a lightweight proxy directly from the target LLM, preserving critical influence information through low-rank compression. Iprox jointly aligns both gradients and logits between the proxy and the target model, yielding a controllable trade-off between influence fidelity and computational efficiency. Experiments on Qwen3-4B and Llama3.2 show that a 1.5B Iprox proxy outperforms an off-the-shelf 1.7B proxy, and that on Llama3.2 Iprox surpasses baseline methods at less than half the computational cost, markedly improving data selection efficiency and performance.
📝 Abstract
Supervised fine-tuning (SFT) relies critically on selecting training data that most benefits a model's downstream performance. Gradient-based data selection methods such as TracIn and Influence Functions leverage influence to identify useful samples, but their computational cost scales poorly, making them impractical for multi-billion-parameter large language models (LLMs). A common alternative is to use off-the-shelf smaller models as proxies, but they remain suboptimal since their learning dynamics are unclear, their sizes cannot be flexibly adjusted, and they cannot be further aligned with the target model in terms of gradient-based influence estimation. To address these challenges, we introduce Iprox, a two-stage framework that derives influence-preserving proxies directly from the target model. It first applies a low-rank compression stage that preserves the target model's influence information, followed by an alignment stage that matches both gradients and logits, thereby constructing proxies that flexibly control computational cost while retaining the target model's influence. Experimental results across diverse LLM families and evaluation tasks show that Iprox consistently outperforms off-the-shelf proxies and baseline methods. On Qwen3-4B, a 1.5B proxy constructed with Iprox achieves stronger performance than the larger 1.7B off-the-shelf proxy. Notably, on Llama3.2, Iprox achieves better performance than baselines while reducing computational cost by more than half relative to the full 3B model. These results show that Iprox provides effective influence-preserving proxies, making gradient-based data selection more scalable for LLMs.
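To make the core intuition concrete, here is a minimal toy sketch (not the paper's implementation) of TracIn-style influence scoring for logistic regression, computed once with full per-example gradients and once after projecting those gradients onto a low-rank subspace. It illustrates why low-rank compression can preserve influence rankings when gradients are approximately low-rank; all dimensions, the synthetic data, and the choice of SVD projection are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic, approximately low-rank features: x = z @ F + small noise,
# so per-example gradients also lie near an r-dimensional subspace.
n, d, r, k = 200, 50, 8, 10
F = rng.normal(size=(r, d))
X = rng.normal(size=(n, r)) @ F + 0.01 * rng.normal(size=(n, d))
y = (rng.random(n) < 0.5).astype(float)
w = 0.1 * rng.normal(size=d)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Per-example logistic-loss gradients: g_i = (sigma(x_i . w) - y_i) * x_i.
G = (sigmoid(X @ w) - y)[:, None] * X            # shape (n, d)

# One "validation" gradient to score training examples against.
x_val = rng.normal(size=r) @ F
g_val = (sigmoid(x_val @ w) - 1.0) * x_val       # label 1 assumed

# Full TracIn-style influence: dot product with the validation gradient.
infl_full = G @ g_val

# Low-rank compression: project all gradients onto the top-k right
# singular vectors of the training-gradient matrix.
_, _, Vt = np.linalg.svd(G, full_matrices=False)
P = Vt[:k].T                                     # projection basis, (d, k)
infl_comp = (G @ P) @ (P.T @ g_val)

# Rank agreement between full and compressed influence scores.
rank = lambda v: np.argsort(np.argsort(v))
corr = np.corrcoef(rank(infl_full), rank(infl_comp))[0, 1]
print(corr, infl_full.argmax() == infl_comp.argmax())
```

Because the gradients concentrate in a low-dimensional subspace, the compressed dot products closely track the full ones, so the most influential examples are ranked nearly identically at a fraction of the gradient dimensionality. Iprox applies this idea at LLM scale and additionally aligns the proxy's gradients and logits with the target model.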