🤖 AI Summary
This study investigates why first-order gradient updates in neural networks struggle to preserve logical consistency, focusing on the structural limitations of the Linear Propagation Assumption (LPA) in relational reasoning. By integrating relation algebra with the geometric structure of feature spaces, the work formally analyzes the constraints LPA imposes under negation, converse, and relation composition. It shows that local parameter updates cannot consistently propagate to logical inferences because of a fundamental conflict: negation and converse demand feature disentanglement, whereas relation composition inherently relies on bilinear maps. This tension provides a unified explanation for several empirical phenomena, including failures in knowledge editing, the reversal curse, and bottlenecks in multi-hop reasoning.
📝 Abstract
Neural networks adapt through first-order parameter updates, yet it remains unclear whether such updates preserve logical coherence. We investigate the geometric limits of the Linear Propagation Assumption (LPA), the premise that local updates coherently propagate to logical consequences. To formalize this, we adopt relation algebra and study three core operations on relations: negation flips truth values, converse swaps argument order, and composition chains relations. For negation and converse, we prove that guaranteeing direction-agnostic first-order propagation necessitates a tensor factorization separating entity-pair context from relation content. However, for composition, we identify a fundamental obstruction. We show that composition reduces to conjunction, and prove that any conjunction well-defined on linear features must be bilinear. Since bilinearity is incompatible with negation, this forces the feature map to collapse. These results suggest that failures in knowledge editing, the reversal curse, and multi-hop reasoning may stem from common structural limitations inherent to the LPA.
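The three relation-algebra operations the abstract names can be made concrete by encoding relations over a finite universe as 0/1 adjacency matrices. The sketch below is illustrative only (it is not from the paper): negation is entrywise complement, converse is transpose, and composition is a boolean matrix product, whose inner term is exactly the pairwise conjunction (the bilinear structure) the abstract refers to.

```python
import numpy as np

# Relations over a 3-entity universe, encoded as 0/1 matrices:
# R[i, j] = 1 iff the relation holds between entity i and entity j.
# Hypothetical example relation: edges 0 -> 1 and 1 -> 2.
R = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

# Negation flips truth values: entrywise complement.
neg_R = 1 - R

# Converse swaps argument order: matrix transpose.
conv_R = R.T

# Composition chains relations:
#   (R ; R)[i, k] = OR_j ( R[i, j] AND R[j, k] ).
# The inner AND is a conjunction, so composition is built from a
# bilinear (product) interaction between the two relations' entries.
comp = (R @ R > 0).astype(int)

print(comp)  # only the 2-hop pair 0 -> 2 survives
```

Running this shows `comp` containing a single 1 at position (0, 2): composing the one-step relation with itself yields exactly the two-hop inference, while `neg_R` and `conv_R` stay linear, entrywise or permutation-level, operations on the same matrix.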