🤖 AI Summary
This work identifies a fundamental flaw in the widely adopted assumption of independence among symbolic concepts in neurosymbolic (NeSy) predictors: it formally constrains a model's capacity to represent uncertainty over combinations of concepts, thereby preventing the model from detecting "reasoning shortcuts", i.e., cases where predictions are correct but the underlying concept-level reasoning is flawed. Combining probabilistic inference with a formal analysis of NeSy predictors, the authors give the first rigorous proof that the independence assumption renders such models unaware of reasoning shortcuts, settling a long-standing debate in the field. The analysis establishes theoretical limits on uncertainty modeling in NeSy systems and yields concrete design principles for avoiding reasoning shortcuts. These contributions advance the foundations of interpretable and trustworthy hybrid intelligence, offering principled guidance for building robust, logically sound NeSy architectures.
📝 Abstract
The ubiquitous independence assumption among symbolic concepts in neurosymbolic (NeSy) predictors is a convenient simplification: NeSy predictors use it to speed up probabilistic reasoning. Recent works, such as van Krieken et al. (2024) and Marconato et al. (2024), have argued that the independence assumption can hinder the learning of NeSy predictors and, more crucially, prevent them from correctly modelling uncertainty. There is, however, scepticism in the NeSy community about the scenarios in which the independence assumption actually limits NeSy systems (Faronius and Dos Martires, 2025). In this work, we settle this question by formally showing that assuming independence among symbolic concepts entails that a model can never represent uncertainty over certain concept combinations. As a consequence, the model fails to be aware of reasoning shortcuts, i.e., the pathological behaviour whereby NeSy predictors produce correct predictions on downstream tasks but for the wrong reasons.
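To make the core claim concrete, here is a minimal sketch (our own illustration, not code from the paper) of a two-concept example of the XOR-style kind discussed in the cited works. The target distribution is a hypothetical posterior over two binary concepts that are perfectly correlated; the best independent approximation in KL(p || q), which is the product of p's marginals, is forced to spread mass onto combinations the target rules out.

```python
import itertools

# Hypothetical target posterior over two binary concepts (C1, C2):
# all mass on the perfectly correlated combinations (0, 0) and (1, 1).
p = {(0, 0): 0.5, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 0.5}

# The KL(p || q)-optimal independent model q(c1, c2) = q1(c1) * q2(c2)
# is the product of p's marginals (a standard result).
p1 = sum(prob for (c1, _), prob in p.items() if c1 == 1)  # P(C1 = 1) = 0.5
p2 = sum(prob for (_, c2), prob in p.items() if c2 == 1)  # P(C2 = 1) = 0.5

for c1, c2 in itertools.product((0, 1), repeat=2):
    q = (p1 if c1 else 1.0 - p1) * (p2 if c2 else 1.0 - p2)
    print(f"p{(c1, c2)} = {p[(c1, c2)]:.2f}   independent q = {q:.2f}")
```

Every cell of q comes out to 0.25: the factorised model either leaks mass onto the impossible combinations (0, 1) and (1, 0) or, to avoid that, must collapse to a deterministic point mass on a single combination. In neither case can it represent "the concepts are perfectly correlated but individually uncertain", which is exactly the kind of uncertainty over concept combinations that the abstract says the independence assumption rules out.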