Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing

πŸ“… 2025-09-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This study addresses the challenge of attributing performance differences that arise during LLM fine-tuning, a question conventional benchmarking leaves unanswered. We propose a model diffing framework based on crosscoders, enabling fine-grained, interpretable attribution of capability-level changes. By comparing latent-space representations and quantifying multidimensional competencies, the method maps leaderboard discrepancies to specific gains or losses in functional abilities. Applied to Gemma-2-9b-it and its SimPO-enhanced variant, it identifies substantial improvements in instruction following (+151.7%), safety (+32.8%), and multilingual proficiency (+43.8%), alongside significant degradation in self-referential reasoning (−44.1%) and hallucination mitigation (−68.5%). This work establishes a paradigm for mechanistic interpretability in LLM fine-tuning, shifting evaluation from holistic metrics to capability-specific causal analysis.

πŸ“ Abstract
As fine-tuning becomes the dominant paradigm for improving large language models (LLMs), understanding what changes during this process is increasingly important. Traditional benchmarking often fails to explain why one model outperforms another. In this work, we use model diffing, a mechanistic interpretability approach, to analyze the specific capability differences between Gemma-2-9b-it and a SimPO-enhanced variant. Using crosscoders, we identify and categorize latent representations that differentiate the two models. We find that the latent concepts acquired through SimPO predominantly enhance safety mechanisms (+32.8%), multilingual capabilities (+43.8%), and instruction following (+151.7%), while the additional training also reduces emphasis on model self-reference (−44.1%) and hallucination management (−68.5%). Our analysis shows that model diffing can yield fine-grained insights beyond leaderboard metrics, attributing performance gaps to concrete mechanistic capabilities. This approach offers a transparent and targeted framework for comparing LLMs.
Problem

Research questions and friction points this paper is trying to address.

Understanding performance differences between fine-tuned LLMs beyond benchmarks
Analyzing specific capability changes using mechanistic interpretability methods
Identifying concrete improvements in safety, multilingualism, and instruction-following
Innovation

Methods, ideas, or system contributions that make the work stand out.

Model diffing analyzes capability differences between LLMs
Crosscoders identify latent representations differentiating model variants
Approach attributes performance gaps to concrete mechanistic capabilities
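The diffing idea above can be illustrated with a minimal sketch. In crosscoder-based model diffing, a single shared dictionary of latents is learned over paired activations from both models, with a separate decoder per model; a latent whose decoder norm is concentrated in one model flags a concept specific to that model. The decoder matrices below are hand-set toy values (not the paper's learned weights) chosen only to show the relative-norm computation:

```python
import numpy as np

# Toy decoder matrices for 3 shared crosscoder latents over a 4-dim
# activation space. In a real crosscoder these come from dictionary
# learning on paired activations; here they are hand-set for illustration.
W_base = np.ones((3, 4))    # decoder weights in the base model (Gemma-2-9b-it)
W_simpo = np.ones((3, 4))   # decoder weights in the SimPO-enhanced variant
W_simpo[0] *= 9.0           # latent 0: concentrated in the SimPO model
W_base[1] *= 9.0            # latent 1: concentrated in the base model

norm_base = np.linalg.norm(W_base, axis=1)
norm_simpo = np.linalg.norm(W_simpo, axis=1)

# Relative decoder norm per latent: ~1.0 => SimPO-specific,
# ~0.0 => base-specific, ~0.5 => shared by both models.
rel_norm = norm_simpo / (norm_base + norm_simpo)
print(rel_norm)  # -> [0.9 0.1 0.5]
```

Model-specific latents surfaced this way are then inspected and categorized (e.g. as safety, multilingual, or instruction-following concepts), which is how the summary's capability-level gains and losses are attributed.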
πŸ”Ž Similar Papers
No similar papers found.