📝 Abstract
While significant progress has been made in task-specific applications, current models struggle with deep reasoning, generality, and adaptation -- key components of System 2 reasoning that are crucial for achieving Artificial General Intelligence (AGI). Despite the promise of approaches such as program synthesis, language models, and transformers, these methods often fail to generalize beyond their training data or to adapt to novel tasks, limiting their ability to perform human-like reasoning. This paper examines the limitations of existing approaches in achieving advanced System 2 reasoning and highlights the importance of generality and adaptation for AGI. We then propose four key research directions to address these gaps: (1) learning human intentions from action sequences, (2) combining symbolic and neural models, (3) meta-learning for unfamiliar environments, and (4) multi-step reasoning through reinforcement learning. Through these directions, we aim to advance models' ability to generalize and adapt, bringing computational models closer to the reasoning capabilities required for AGI.