AI Summary
Current evaluations of large language models focus predominantly on task performance, offering limited insight into whether models rely on linguistically principled reasoning mechanisms and leaving the evaluations themselves susceptible to confirmation bias. This work proposes an interpretability framework based on token-level perplexity: it examines how perplexity is distributed across minimally contrasting sentence pairs that differ only at critical linguistic tokens, thereby testing whether models condition their predictions on the expected linguistic cues. This approach represents the first application of token-level perplexity to hypothesis-driven validation of linguistic mechanisms, circumventing the instability inherent in conventional feature-attribution methods. Experiments across multiple open-source large language models reveal that, while the key tokens significantly influence model behavior, they alone cannot fully account for the observed perplexity variations, indicating that models also rely on unintended heuristic strategies.
Abstract
Standard evaluations of large language models (LLMs) focus on task performance, offering limited insight into whether correct behavior reflects appropriate underlying mechanisms and risking confirmation bias. We introduce a simple, principled interpretability framework based on token-level perplexity to test whether models rely on linguistically relevant cues. By comparing perplexity distributions over minimal sentence pairs that differ in one or a few 'pivotal' tokens, our method enables precise, hypothesis-driven analysis without relying on unstable feature-attribution techniques. Experiments on controlled linguistic benchmarks with several open-weight LLMs show that, while linguistically important tokens influence model behavior, they never fully explain the perplexity shifts, revealing that models rely on heuristics other than the expected linguistic ones.
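To illustrate the core measurement, the sketch below computes per-token surprisal and sentence-level perplexity for a minimal pair whose sentences differ only at a pivotal agreement token. This is a minimal sketch, not the paper's exact pipeline: the model name (`gpt2`), the example sentence pair, and the helper `token_level_surprisal` are illustrative assumptions.

```python
# Minimal sketch (assumed setup, not the paper's exact pipeline):
# token-level surprisal and sentence perplexity for a minimal pair,
# using an open-weight causal LM via Hugging Face Transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any open-weight causal LM could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def token_level_surprisal(sentence: str):
    """Return (predicted tokens, per-token negative log-likelihoods)."""
    enc = tokenizer(sentence, return_tensors="pt")
    input_ids = enc["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits
    # Shift so that position i predicts token i+1.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = input_ids[:, 1:]
    nll = -log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)[0]
    tokens = tokenizer.convert_ids_to_tokens(targets[0].tolist())
    return tokens, nll

# Minimal pair differing only at the pivotal (agreement) token.
grammatical = "The keys to the cabinet are on the table."
ungrammatical = "The keys to the cabinet is on the table."

for sent in (grammatical, ungrammatical):
    tokens, nll = token_level_surprisal(sent)
    ppl = torch.exp(nll.mean()).item()  # sentence-level perplexity
    print(f"{sent!r}  perplexity={ppl:.2f}")
    for tok, s in zip(tokens, nll):
        print(f"  {tok:>12s}  surprisal={s.item():.3f}")
```

Under the hypothesis tested in the paper, the perplexity difference between the two sentences should be concentrated at the pivotal token; a large residual difference spread over other positions would suggest the model is relying on additional, unintended cues.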