🤖 AI Summary
Prior work has overestimated the role of surprisal in predicting reading times by conflating its effect with that of word frequency, a pervasive confound. Method: We introduce a novel orthogonal projection technique that projects language-model surprisal onto the orthogonal complement of word frequency, thereby statistically isolating contextual predictability from lexical frequency. We also examine pointwise mutual information (PMI) as an alternative contextual predictor and, after orthogonalization, use regression to quantify the unique variance in reading times explained by context alone. Contribution/Results: Orthogonalized surprisal is uncorrelated with word frequency (r = 0), and the variance it explains in reading times drops substantially, by 40–60%, relative to unadjusted surprisal. This demonstrates that context's independent contribution to reading time prediction is markedly smaller than previously assumed. Our approach establishes a more rigorous, reproducible quantitative benchmark for evaluating contextual effects in language comprehension, challenging information-theoretic accounts grounded solely in surprisal.
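The orthogonal projection described above can be sketched as an ordinary least-squares residualization: regress surprisal on frequency and keep the residual, which is exactly orthogonal to the frequency regressor. The sketch below uses synthetic data, and all variable names are illustrative assumptions, not the paper's actual estimates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic data: per-word log-frequency and surprisal,
# correlated by construction (frequent words tend to be less surprising).
n = 1000
log_freq = rng.normal(size=n)
surprisal = -0.8 * log_freq + rng.normal(scale=0.6, size=n)

# Project surprisal onto the orthogonal complement of frequency:
# regress surprisal on frequency (with an intercept), keep the residual.
X = np.column_stack([np.ones(n), log_freq])
beta, *_ = np.linalg.lstsq(X, surprisal, rcond=None)
surprisal_orth = surprisal - X @ beta

# OLS residuals are orthogonal to the regressors, so the new
# predictor is uncorrelated with frequency (r = 0 up to float error).
r = np.corrcoef(log_freq, surprisal_orth)[0, 1]
print(f"corr(frequency, orthogonalized surprisal) = {r:.2e}")
```

Any frequency information carried by surprisal is absorbed by the regression and removed; what remains is, by construction, the component of surprisal that frequency cannot account for.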
📝 Abstract
We present a new perspective on how readers integrate context during real-time language comprehension. Our proposals build on surprisal theory, which posits that the processing effort of a linguistic unit (e.g., a word) is an affine function of its in-context information content. We first observe that surprisal is only one of many ways a contextual predictor can be derived from a language model. Another is the pointwise mutual information (PMI) between a unit and its context, which turns out to yield the same predictive power as surprisal once unigram frequency is controlled for. Moreover, both PMI and surprisal are correlated with frequency, which means that neither contains information about context alone. In response, we propose a technique that projects surprisal onto the orthogonal complement of frequency, yielding a new contextual predictor that is uncorrelated with frequency. Our experiments show that the proportion of variance in reading times explained by context is substantially smaller when context is represented by this orthogonalized predictor. From an interpretability standpoint, this suggests that previous studies may have overstated the role of context in predicting reading times.
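The drop in variance explained can be illustrated with a toy regression: when reading times depend on both frequency and surprisal, raw surprisal absorbs frequency's share of the variance, while the orthogonalized predictor captures only the contextual component. The coefficients and data below are synthetic assumptions for illustration, not the paper's results.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Hypothetical generative story: reading times driven by both frequency
# and surprisal, with surprisal itself correlated with frequency.
log_freq = rng.normal(size=n)
surprisal = -0.8 * log_freq + rng.normal(scale=0.6, size=n)
reading_time = -2.0 * log_freq + surprisal + rng.normal(size=n)

def r_squared(x, y):
    """R^2 of an OLS fit of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

# Orthogonalize surprisal against frequency (OLS residual).
F = np.column_stack([np.ones(n), log_freq])
b, *_ = np.linalg.lstsq(F, surprisal, rcond=None)
surprisal_orth = surprisal - F @ b

r2_raw = r_squared(surprisal, reading_time)
r2_orth = r_squared(surprisal_orth, reading_time)
print(f"R^2 with raw surprisal:           {r2_raw:.3f}")
print(f"R^2 with orthogonalized surprisal: {r2_orth:.3f}")
```

Because raw surprisal doubles as a proxy for frequency, it predicts reading times well even where frequency is doing the work; once the shared component is projected out, the R^2 attributable to context alone is much smaller.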