🤖 AI Summary
This paper investigates whether neural networks can achieve human-like systematic compositionality—i.e., flexible generalization across structural domains—via meta-learning, challenging optimistic claims about meta-learning’s compositional potential.
Method: Grounded in cognitive science principles, the authors design a task decomposition framework and an evaluation methodology sensitive to structural variation, and empirically test mainstream meta-learning approaches on compositional benchmarks (SCAN, gSCAN).
Contribution/Results: The authors demonstrate that current meta-learning models exhibit only superficial compositional behavior, and only under highly constrained data distributions and task constructions; the models fail to generalize robustly across syntactic or semantic structures. The analysis shows that meta-learning does not overcome the core challenge that symbolic systems theory poses to connectionist models, namely the inability to inherently represent and manipulate structured symbolic knowledge. The work establishes critical theoretical boundaries for compositional modeling and introduces a rigorous, structure-aware evaluation paradigm for assessing systematic generalization in neural systems.
📝 Abstract
Strong meta-learning for systematic compositionality is emerging as an important capability for navigating the complex and changing tasks of today's world. However, when presenting models for robust adaptation to novel environments, it is important to refrain from making unsupported claims about the performance of meta-learning systems that ultimately do not stand up to scrutiny. Fodor and Pylyshyn famously posited that neural networks inherently lack this capacity, as they cannot model compositional representations or structure-sensitive operations, and thus are not a viable model of the human mind; Lake and Baroni recently presented meta-learning as a pathway to compositionality. In this position paper, we critically revisit this claim and highlight limitations in the proposed meta-learning framework for compositionality. Our analysis shows that modern neural meta-learning systems can perform such tasks, if at all, only under a very narrow and restricted definition of a meta-learning setup. We therefore claim that 'Fodor and Pylyshyn's legacy' persists: to date, no human-like systematic compositionality has been learned in neural networks.