🤖 AI Summary
This study addresses the fundamental challenge of attributing performance differences observed during LLM fine-tuning, a limitation that conventional benchmarking leaves unmet. We propose a model diffing framework grounded in crosscoders, enabling fine-grained, interpretable attribution of capability-level changes. By comparing latent-space representations and quantifying multidimensional competencies, our method maps leaderboard discrepancies to specific gains or losses in functional abilities. Applied to Gemma-2-9B-IT and its SimPO-enhanced variant, we identify substantial improvements in instruction following (+151.7%), safety (+32.8%), and multilingual proficiency (+43.8%), while detecting significant degradation in self-referential reasoning (-44.1%) and hallucination mitigation (-68.5%). This work establishes a novel paradigm for mechanistic interpretability in LLM fine-tuning, shifting evaluation from holistic metrics to capability-specific causal analysis.
📄 Abstract
As fine-tuning becomes the dominant paradigm for improving large language models (LLMs), understanding what changes during this process is increasingly important. Traditional benchmarking often fails to explain why one model outperforms another. In this work, we use model diffing, a mechanistic interpretability approach, to analyze the specific capability differences between Gemma-2-9b-it and a SimPO-enhanced variant. Using crosscoders, we identify and categorize latent representations that differentiate the two models. We find that SimPO-acquired latent concepts predominantly enhance safety mechanisms (+32.8%), multilingual capabilities (+43.8%), and instruction-following (+151.7%), while the additional training also reduces emphasis on model self-reference (-44.1%) and hallucination management (-68.5%). Our analysis shows that model diffing can yield fine-grained insights beyond leaderboard metrics, attributing performance gaps to concrete mechanistic capabilities. This approach offers a transparent and targeted framework for comparing LLMs.
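To make the crosscoder-based diffing step concrete, here is a minimal sketch of one common way model-specific latents are identified: a crosscoder trained jointly on both models has a separate decoder weight vector per latent per model, and the relative decoder-norm difference classifies each latent as base-specific, fine-tune-specific, or shared. The function name, thresholds, and toy data below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def classify_latents(dec_base, dec_ft, lo=0.3, hi=0.7):
    """Classify crosscoder latents by relative decoder-norm difference.

    dec_base, dec_ft: (n_latents, d_model) decoder weights, one row per
    latent, for the base and fine-tuned model respectively.
    Returns r in [0, 1]: values near 0 mark base-specific latents, near 1
    fine-tune-specific latents, near 0.5 shared latents. The lo/hi
    thresholds are illustrative choices, not fixed constants.
    """
    norm_base = np.linalg.norm(dec_base, axis=1)
    norm_ft = np.linalg.norm(dec_ft, axis=1)
    r = norm_ft / (norm_base + norm_ft + 1e-9)  # relative norm difference
    labels = np.where(r > hi, "ft_specific",
             np.where(r < lo, "base_specific", "shared"))
    return r, labels

# Toy example: 3 latents in a 2-dimensional residual stream.
dec_base = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.01]])
dec_ft   = np.array([[1.0, 0.0], [0.0, 0.01], [0.0, 1.0]])
r, labels = classify_latents(dec_base, dec_ft)
# latent 0 is shared, latent 1 base-specific, latent 2 ft-specific
```

In a real setting, the fine-tune-specific latents surfaced this way would then be interpreted (e.g. via their top-activating examples) and grouped into capability categories such as safety or instruction-following.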