🤖 AI Summary
This study investigates whether language models can infer implicit world knowledge about novel entities from discourse connectives (e.g., “although”, “therefore”). To this end, the authors construct WUGNECTIVES, a benchmark of 8,880 instances, and systematically evaluate 17 state-of-the-art models across seven connective categories for semantic reasoning. Departing from prior work focused on connective classification or syntactic modeling, they use connectives as “knowledge probes” to reverse-engineer models’ attribute inferences about unseen entities. Results reveal a consistent and severe bottleneck in modeling concessive relations (e.g., “although…yet”), which persists even after instruction tuning. Regression analysis further uncovers a strong nonlinear association between connective type and reasoning performance. These findings expose a fundamental limitation of current large language models in capturing fine-grained logical distinctions and establish a novel paradigm for evaluating semantic reasoning.
📝 Abstract
World knowledge has proven particularly crucial for predicting the discourse connective that marks the discourse relation between two arguments, a task at which language models (LMs) are generally successful. We flip this premise in our work and instead study the inverse problem: whether discourse connectives can inform LMs about the world. To this end, we present WUGNECTIVES, a dataset of 8,880 stimuli that evaluates LMs' inferences about novel entities in contexts where connectives link the entities to particular attributes. Investigating 17 different LMs at various scales and training regimens, we found that tuning an LM to exhibit reasoning behavior yields noteworthy improvements on most connectives. At the same time, there was large variation in LMs' overall performance across connective types, with all models systematically struggling on connectives that express a concessive meaning. Our findings pave the way for more nuanced investigations into the functional role of language cues as captured by LMs. We release WUGNECTIVES at https://github.com/sheffwb/wugnectives.