🤖 AI Summary
Existing training data attribution (TDA) methods treat sample influence as static, ignoring the phased, non-stationary evolution of neural network learning. Method: We propose the first stage-aware, dynamic TDA framework grounded in singular learning theory, explicitly modeling non-monotonic influence evolution, including sign flips and abrupt peaks, across training phases. Contribution/Results: Through analytical modeling and experiments on large language models, we establish systematic mappings between dynamic influence patterns and both semantic-hierarchy acquisition and critical model transition points. We validate the theoretical predictions on toy models and observe token-level influence trajectories in real LLMs that closely align with known developmental stages. This work advances TDA from static attribution to a dynamic, phase-sensitive paradigm, offering both a novel theoretical framework and an empirical foundation for understanding how model learning mechanisms and data value co-evolve during training.
📝 Abstract
Current training data attribution (TDA) methods treat the influence one sample has on another as static, but neural networks learn in distinct stages that exhibit changing patterns of influence. In this work, we introduce a framework for stagewise data attribution grounded in singular learning theory. We predict that influence can change non-monotonically, including sign flips and sharp peaks at developmental transitions. We first validate these predictions analytically and empirically in a toy model, showing that dynamic shifts in influence directly map to the model's progressive learning of a semantic hierarchy. Finally, we demonstrate these phenomena at scale in language models, where token-level influence changes align with known developmental stages.
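The contrast between static and stagewise attribution can be illustrated with a minimal sketch. This is not the paper's estimator; it is a TracIn-style proxy (the per-checkpoint dot product of training and test gradients) on a toy linear model, chosen only to show what it means for influence to be recomputed along the training trajectory rather than once at convergence. All names and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Illustrative only: a TracIn-style, checkpoint-wise influence proxy on a
# toy least-squares model. The influence of training point z on test point
# z' at step t is taken as lr * <grad L(z, w_t), grad L(z', w_t)>,
# re-evaluated at every checkpoint instead of only at the final weights.

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2))
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=32)

x_test, y_test = X[0], y[0]   # probe point (assumption: any held-out point works)
x_tr, y_tr = X[1], y[1]       # training point whose influence we track

def grad(w, x, t):
    """Gradient of the squared error (x @ w - t)**2 w.r.t. w."""
    return 2.0 * (x @ w - t) * x

w, lr = np.zeros(2), 0.05
influences = []
for step in range(100):
    # Checkpoint-wise influence: alignment of the two gradients at w_t.
    influences.append(lr * grad(w, x_tr, y_tr) @ grad(w, x_test, y_test))
    # One full-batch gradient-descent step.
    w -= lr * (2.0 * X.T @ (X @ w - y) / len(y))

print("influence at step 0:", influences[0])
print("influence at step 99:", influences[-1])
```

A static method would report only the final (near-zero) value; tracking the whole trajectory exposes how the influence of the same pair of points rises, falls, and can cross zero as training progresses, which is the kind of signal the stagewise framework is built around.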