Concept Influence: Leveraging Interpretability to Improve Performance and Efficiency in Training Data Attribution

📅 2026-02-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the limitations of existing training data attribution methods, which are computationally expensive, rely on individual test samples, and are often confounded by syntactic similarity rather than semantic relevance. To overcome these issues, the authors propose the Concept Influence framework, which, for the first time, incorporates semantic concept directions into the attribution process. By leveraging interpretable internal model structures such as linear probes or sparse autoencoder features, the method shifts influence modeling from specific test examples to semantic directions. This substantially improves both the semantic fidelity and the scalability of attribution, and it reveals that probe-based methods serve as efficient first-order approximations of the framework. Experiments demonstrate that Concept Influence matches classical influence functions on emergent misalignment benchmarks and real-world post-training datasets while being over an order of magnitude more computationally efficient.
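For intuition, a classical influence function scores a training point against a single test example, while Concept Influence, as summarized above, replaces the test-side gradient with the gradient of a concept readout. A minimal sketch in standard influence-function notation; here $c_v(\theta)$ is an assumed readout (e.g., the model's activation along a probe or SAE direction $v$), not notation taken from the paper:

```latex
% Classical influence of training point z on test point z_test
\mathcal{I}(z, z_{\mathrm{test}})
  = -\,\nabla_\theta \mathcal{L}(z_{\mathrm{test}}; \theta)^{\top}
      H_\theta^{-1}\,
      \nabla_\theta \mathcal{L}(z; \theta)

% Concept Influence: the test-side gradient is replaced by the gradient
% of a concept readout c_v along a semantic direction v
\mathcal{I}_{\mathrm{concept}}(z, v)
  = -\,\nabla_\theta\, c_v(\theta)^{\top}
      H_\theta^{-1}\,
      \nabla_\theta \mathcal{L}(z; \theta)
```

where $H_\theta$ is the Hessian of the training loss. Under this reading, dropping the inverse-Hessian term leaves a plain gradient-alignment score, which is consistent with the claim that probe-based methods act as first-order approximations.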

📝 Abstract
As large language models are increasingly trained and fine-tuned, practitioners need methods to identify which training data drive specific behaviors, particularly unintended ones. Training Data Attribution (TDA) methods address this by estimating datapoint influence. Existing approaches such as influence functions are computationally expensive and attribute based on single test examples, which can bias results toward syntactic rather than semantic similarity. To address these issues of scalability and attribution to abstract behaviors, we leverage interpretable structures within the model during attribution. First, we introduce Concept Influence, which attributes model behavior to semantic directions (such as linear probes or sparse autoencoder features) rather than individual test examples. Second, we show that simple probe-based attribution methods are first-order approximations of Concept Influence that achieve comparable performance while being over an order of magnitude faster. We empirically validate Concept Influence and its approximations on emergent misalignment benchmarks and real post-training datasets, demonstrating performance comparable to classical influence functions while being substantially more scalable. More broadly, we show that incorporating interpretable structure into traditional TDA pipelines can enable more scalable and explainable attribution and better control of model behavior through data.
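To make the probe-based first-order approximation concrete, the sketch below scores each training example by the alignment of its hidden representation with a concept direction. This is a minimal illustrative reading, not the paper's exact estimator: the function name, the mean-pooled-activation input, and the projection scoring rule are all assumptions.

```python
import numpy as np

def probe_attribution_scores(train_acts: np.ndarray, concept_dir: np.ndarray) -> np.ndarray:
    """Score training examples by alignment with a concept direction.

    train_acts:  (n_examples, d_hidden) hidden activations, e.g. mean-pooled
                 residual-stream states extracted from a frozen model.
    concept_dir: (d_hidden,) direction from a linear probe or SAE feature.

    Returns one scalar per example; higher means the example's representation
    points further along the concept direction. (Illustrative scoring rule --
    the paper's exact algorithm may differ.)
    """
    v = concept_dir / np.linalg.norm(concept_dir)  # unit concept direction
    return train_acts @ v                          # projection per example

# Toy usage: 1,000 synthetic training examples in a 64-dim hidden space.
rng = np.random.default_rng(0)
acts = rng.normal(size=(1000, 64))
v = rng.normal(size=64)                            # stand-in probe direction
scores = probe_attribution_scores(acts, v)
top_k = np.argsort(scores)[::-1][:10]              # most concept-aligned examples
print("top-10 candidate training examples:", top_k)
```

Because this requires only one forward pass per training example and a single dot product per score, it avoids the per-test-example gradient and inverse-Hessian computations of classical influence functions, consistent with the claimed order-of-magnitude speedup.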
Problem

Research questions and friction points this paper is trying to address.

Training Data Attribution
Concept Influence
Interpretability
Scalability
Model Behavior
Innovation

Methods, ideas, or system contributions that make the work stand out.

Concept Influence
Training Data Attribution
Interpretability
Scalable Attribution
Semantic Directions