Is my model perplexed for the right reason? Contrasting LLMs' Benchmark Behavior with Token-Level Perplexity

πŸ“… 2026-03-31
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Current evaluations of large language models predominantly focus on task performance, offering limited insight into whether models rely on linguistically principled reasoning mechanisms and remaining susceptible to confirmation bias. This work proposes an interpretability framework based on token-level perplexity, which examines how perplexity is distributed across minimally contrasting sentence pairs that differ only at critical linguistic tokens, thereby testing whether models condition their predictions on the expected linguistic cues. This approach represents the first application of token-level perplexity for hypothesis-driven validation of linguistic mechanisms, circumventing the instability inherent in conventional feature-attribution methods. Experiments across multiple open-source large language models reveal that while key tokens significantly influence model behavior, they alone cannot fully account for the observed perplexity variations, indicating that models continue to rely on unintended heuristic strategies.
πŸ“ Abstract
Standard evaluations of large language models (LLMs) focus on task performance, offering limited insight into whether correct behavior reflects appropriate underlying mechanisms and risking confirmation bias. We introduce a simple, principled interpretability framework based on token-level perplexity to test whether models rely on linguistically relevant cues. By comparing perplexity distributions over minimal sentence pairs differing in one or a few 'pivotal' tokens, our method enables precise, hypothesis-driven analysis without relying on unstable feature-attribution techniques. Experiments on controlled linguistic benchmarks with several open-weight LLMs show that, while linguistically important tokens influence model behavior, they never fully explain perplexity shifts, revealing that models rely on heuristics other than the expected linguistic ones.
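The core quantity behind the framework, sentence-level perplexity built from per-token log-probabilities, can be sketched in a few lines. The sentences and log-probability values below are invented for illustration (they are not outputs of any model in the paper); the point is how a minimal pair isolates the contribution of the pivotal token to the perplexity gap.

```python
import math

def perplexity(logprobs):
    """Perplexity = exp of the mean negative log-probability per token."""
    return math.exp(-sum(logprobs) / len(logprobs))

# Hypothetical per-token log-probs for a minimal pair differing at one
# "pivotal" token, e.g. "The keys ... are ..." vs. "The keys ... is ...".
grammatical   = [-1.2, -0.8, -0.5, -1.0]
ungrammatical = [-1.2, -0.8, -3.9, -1.0]  # only position 2 differs

ppl_ok  = perplexity(grammatical)
ppl_bad = perplexity(ungrammatical)
assert ppl_bad > ppl_ok  # higher surprise on the violating sentence

# The paper's diagnostic question: how much of the perplexity shift is
# localized at the pivotal token, rather than spread over the sentence?
# Here, by construction, the entire gap comes from position 2.
pivot_gap = grammatical[2] - ungrammatical[2]
print(f"ppl ok={ppl_ok:.2f}  bad={ppl_bad:.2f}  pivot log-prob gap={pivot_gap:.1f}")
```

In practice the log-probabilities would come from an open-weight LLM's output distribution at each position; if the perplexity gap is not fully explained by the pivotal tokens, the model is attending to other, unintended cues.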
Problem

Research questions and friction points this paper is trying to address.

large language models
model interpretability
token-level perplexity
linguistic cues
benchmark evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

token-level perplexity
interpretability framework
minimal pairs
linguistic cues
large language models
πŸ”Ž Similar Papers
No similar papers found.
Zoë Prins
College of Informatics, University of Amsterdam
Samuele Punzo
College of Informatics, University of Amsterdam
Frank Wildenburg
College of Informatics, University of Amsterdam
Giovanni CinΓ 
Giovanni CinΓ 
Amsterdam University Medical Center | University of Amsterdam
Medical AI, Machine Learning, Mathematical Logic
Sandro Pezzelle
Assistant Professor at ILLC, University of Amsterdam
Natural Language Processing, Multimodal Machine Learning, AI, Cognitive Science