🤖 AI Summary
This work investigates large language models’ (LLMs) ability to distinguish semantic relatedness from genuine causal explanatory relationships. To this end, we introduce CLEAR-3K—the first benchmark dataset dedicated to causal explanation discrimination—comprising 3,000 assertion-reason pairs. We propose a standardized binary discrimination evaluation framework and employ robust metrics, notably Matthews Correlation Coefficient (MCC), to systematically assess 21 LLMs spanning 0.5B–72B parameters. Our key findings reveal: (1) LLMs consistently conflate semantic similarity with causality; and (2) their causal judgment strategy shifts from overly conservative to overly permissive as parameter count increases. The best-performing model achieves only MCC = 0.55, exposing a fundamental limitation in current LLMs’ causal explanatory reasoning. CLEAR-3K establishes a reproducible, scalable, and rigorous benchmark for evaluating causal reasoning capabilities in foundation models.
📝 Abstract
We introduce CLEAR-3K, a dataset of 3,000 assertion-reasoning questions designed to evaluate whether language models can determine if one statement causally explains another. Each question presents an assertion-reason pair and challenges language models to distinguish between semantic relatedness and genuine causal explanatory relationships. Through comprehensive evaluation of 21 state-of-the-art language models (ranging from 0.5B to 72B parameters), we identify two fundamental findings. First, language models frequently confuse semantic similarity with causality, relying on lexical and semantic overlap instead of inferring actual causal explanatory relationships. Second, as parameter count increases, models tend to shift from being overly skeptical about causal relationships to being excessively permissive in accepting them. Despite this shift, performance measured by the Matthews Correlation Coefficient plateaus at just 0.55, even for the best-performing models. Hence, CLEAR-3K provides a crucial benchmark for developing and evaluating genuine causal reasoning in language models, which is an essential capability for applications that require accurate assessment of causal relationships.
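The Matthews Correlation Coefficient used above can be computed directly from the confusion counts of a model's binary causal judgments. A minimal, self-contained sketch (the labels and predictions here are illustrative, not drawn from CLEAR-3K):

```python
import math

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels (1 = causal, 0 = not).

    Illustrative implementation; the paper's exact evaluation pipeline
    is not reproduced here.
    """
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    # Convention: return 0 when any confusion-matrix margin is empty.
    return (tp * tn - fp * fn) / denom if denom else 0.0

# A model that labels every pair "causal" gets MCC = 0, no matter how
# many pairs truly are causal:
print(mcc([1, 0], [1, 1]))  # → 0.0
```

This is why MCC is a sensible choice for the permissiveness finding: unlike raw accuracy, an excessively permissive (or excessively skeptical) model cannot score well simply by favoring one answer.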