Simple LLM Baselines are Competitive for Model Diffing

📅 2026-02-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the lack of a systematic framework in current large language model (LLM) evaluation methodologies, which hinders effective detection of behavioral differences or alignment drift across model versions. We propose the first unified evaluation framework tailored for model diffing, enabling quantitative comparison of LLM-based and sparse autoencoder (SAE)-based approaches along three dimensions: generalization, interestingness, and level of abstraction. By introducing an enhanced LLM baseline, which leverages natural language descriptions generated by LLMs together with multidimensional metrics, we show that this simple approach rivals more complex SAE methods across multiple axes while excelling at uncovering high-level, abstract behavioral discrepancies. Our method establishes an efficient and interpretable paradigm for analyzing model evolution.

📝 Abstract
Standard LLM evaluations only test capabilities or dispositions that evaluators designed them for, missing unexpected differences such as behavioral shifts between model revisions or emergent misaligned tendencies. Model diffing addresses this limitation by automatically surfacing systematic behavioral differences. Recent approaches include LLM-based methods that generate natural language descriptions and sparse autoencoder (SAE)-based methods that identify interpretable features. However, no systematic comparison of these approaches exists, nor are there established evaluation criteria. We address this gap by proposing evaluation metrics for key desiderata (generalization, interestingness, and abstraction level) and use these to compare existing methods. Our results show that an improved LLM-based baseline performs comparably to the SAE-based method while typically surfacing more abstract behavioral differences.
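To make the LLM-based baseline concrete, here is a minimal sketch of the model-diffing loop the abstract describes: collect outputs from two model revisions, have a judge produce a natural-language hypothesis about their difference, and score that hypothesis for generalization on held-out prompts. All functions (`model_a`, `model_b`, `describe_difference`, `holds`) are hypothetical stand-ins, not the paper's actual implementation; in practice the models and the judge would be real LLM calls.

```python
def model_a(prompt: str) -> str:
    # Hypothetical stand-in for the first model revision: plain answers.
    return f"Answer: {prompt}"

def model_b(prompt: str) -> str:
    # Hypothetical stand-in for the second revision: hedges every answer.
    return f"Answer: {prompt} (but I could be wrong)"

def describe_difference(pairs):
    # Stand-in for an LLM judge that reads (output_a, output_b) pairs and
    # emits a natural-language hypothesis about the behavioral difference.
    if all("could be wrong" in b and "could be wrong" not in a for a, b in pairs):
        return "Model B hedges its answers more than Model A."
    return "No systematic difference found."

def holds(description: str, a_out: str, b_out: str) -> bool:
    # Stand-in verifier: does the hypothesized difference hold on one pair?
    # In practice this would itself be an LLM judgment.
    if "hedges" in description:
        return "could be wrong" in b_out and "could be wrong" not in a_out
    return False

def generalization_score(description: str, held_out_prompts) -> float:
    # Fraction of unseen prompts on which the description still holds:
    # a toy version of the paper's generalization desideratum.
    checks = [holds(description, model_a(p), model_b(p)) for p in held_out_prompts]
    return sum(checks) / len(checks)

train_prompts = ["capital of France?", "2 + 2?"]
description = describe_difference([(model_a(p), model_b(p)) for p in train_prompts])
score = generalization_score(description, ["boiling point of water?", "largest planet?"])
```

The other two desiderata (interestingness and abstraction level) would be scored analogously, e.g. by asking a judge model to rate each surfaced description.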
Problem

Research questions and friction points this paper is trying to address.

model diffing
LLM evaluation
behavioral differences
emergent misalignment
evaluation criteria
Innovation

Methods, ideas, or system contributions that make the work stand out.

model diffing
LLM baselines
sparse autoencoder
behavioral differences
evaluation metrics