BehaviorBox: Automated Discovery of Fine-Grained Performance Differences Between Language Models

📅 2025-06-02
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of automatically detecting fine-grained performance disparities between language models during evaluation. The authors propose an automated comparative framework grounded in performance-aware contextual embeddings. The method combines word-level feature clustering with difference-driven sensitivity analysis to identify interpretable semantic-syntactic patterns, such as subjunctive markers ("were") and sentence-final exclamation points, that systematically induce behavioral divergence across models. Unlike holistic metrics (e.g., perplexity), the framework uncovers statistically significant behavioral differences that generalize across models. Evaluated across diverse model scales, architectures, and post-training variants, it identifies dozens of distinct sensitivity patterns, demonstrating strong reproducibility and explanatory power.

📝 Abstract
Language model evaluation is a daunting task: prompts are brittle, corpus-level perplexities are vague, and the choice of benchmarks is endless. Finding examples that show meaningful, generalizable differences between two LMs is crucial to understanding where one model succeeds and another fails. Can this process be done automatically? In this work, we propose a methodology for automated comparison of language models that uses performance-aware contextual embeddings to find fine-grained features of text where one LM outperforms another. Our method, which we name BehaviorBox, extracts coherent features that demonstrate differences with respect to the ease of generation between two LMs. Specifically, BehaviorBox finds features that describe groups of words in fine-grained contexts, such as "conditional 'were' in the phrase 'if you were'" and "exclamation marks after emotional statements", where one model outperforms another within a particular dataset. We apply BehaviorBox to compare models that vary in size, model family, and post-training, and enumerate insights into specific contexts that illustrate meaningful differences in performance which cannot be found by measures such as corpus-level perplexity alone.
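The core idea in the abstract can be sketched in a few lines. The sketch below is illustrative only (all names and the thresholding step are assumptions, not the authors' actual implementation): score each token under two LMs, compute the per-token log-probability gap as the "performance" signal, and group tokens where one model is systematically easier. The paper pairs this signal with contextual embeddings and clustering; here a simple threshold bucket stands in for the clustering step.

```python
# Hypothetical sketch of the comparison signal behind BehaviorBox.
# Names (ScoredToken, group_by_gap) and the threshold heuristic are
# illustrative assumptions; the paper uses performance-aware contextual
# embeddings and feature clustering rather than a fixed cutoff.
from dataclasses import dataclass

@dataclass
class ScoredToken:
    token: str
    context: str
    logp_a: float  # log-probability of the token under model A
    logp_b: float  # log-probability of the token under model B

def performance_gap(t: ScoredToken) -> float:
    """Positive gap => model A finds this token easier to generate."""
    return t.logp_a - t.logp_b

def group_by_gap(tokens, threshold=0.5):
    """Bucket tokens by which model wins, ignoring small gaps."""
    a_wins, b_wins = [], []
    for t in tokens:
        gap = performance_gap(t)
        if gap > threshold:
            a_wins.append(t)
        elif gap < -threshold:
            b_wins.append(t)
    return a_wins, b_wins

# Toy data echoing the paper's example features: conditional "were"
# and sentence-final exclamation marks. Log-probs are made up.
tokens = [
    ScoredToken("were", "if you were", -1.2, -3.0),
    ScoredToken("!", "That's amazing", -0.4, -2.1),
    ScoredToken("the", "on the table", -0.1, -0.15),
]
a_wins, b_wins = group_by_gap(tokens)
```

In the actual method, tokens in the winning buckets would then be clustered by their contextual embeddings so that coherent groups (e.g., all subjunctive "were" tokens) emerge as interpretable features rather than isolated examples.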
Problem

Research questions and friction points this paper is trying to address.

Automated discovery of fine-grained performance differences between language models
Identifying specific text features where one LM outperforms another
Comparing models by size, family, and post-training using contextual embeddings
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated comparison using performance-aware embeddings
Extracts coherent fine-grained text features
Identifies specific contexts for model differences