🤖 AI Summary
How can machine learning achieve human-level data efficiency, i.e., rapid generalization from only dozens of examples?
Method: We propose a decoupled inductive program learning framework that decomposes task learning into complementary, specialized mechanisms: symbolic rule induction, reinforcement learning, and the modeling of learning in online tutoring environments. An ablation analysis framework is introduced to rigorously evaluate the functional division of labor and synergy among these components.
Contribution/Results: We provide the first empirical evidence that multi-mechanism collaboration substantially outperforms both purely symbolic and purely subsymbolic paradigms in few-shot learning. Crucially, mechanism decoupling itself emerges as a key pathway toward human-like learning efficiency. In small-sample regimes (tens of examples), our approach matches human data efficiency and generalizes significantly better than any single-mechanism baseline. Moreover, the performance gain from mechanistic decomposition exceeds that attributable to the choice of representational paradigm, underscoring the centrality of architectural modularity in efficient learning.
📝 Abstract
Human learning relies on specialization -- distinct cognitive mechanisms working together to enable rapid learning. In contrast, most modern neural networks rely on a single mechanism: gradient descent over an objective function. This raises the question: might humans' ability to learn rapidly from just tens of examples, rather than the tens of thousands typical of data-driven deep learning, arise from our use of multiple specialized learning mechanisms in combination? We investigate this question through an ablation analysis of inductive simulations of human learning in online tutoring environments. Comparing reinforcement learning to a more data-efficient 3-mechanism symbolic rule induction approach, we find that decomposing learning into multiple distinct mechanisms significantly improves data efficiency, bringing it in line with human learning. Furthermore, we show that this decomposition has a greater impact on efficiency than the distinction between symbolic and subsymbolic learning alone. Efforts to align data-driven machine learning with human learning often overlook the stark difference in learning efficiency. Our findings suggest that integrating multiple specialized learning mechanisms may be key to bridging this gap.
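To make the idea of decoupled learning mechanisms concrete, here is a minimal illustrative sketch, not the paper's actual system: one mechanism performs symbolic rule induction (generalizing a rule's preconditions across examples), while a separate mechanism tracks reward to arbitrate among matching rules. All class and method names here are hypothetical.

```python
class Rule:
    """A symbolic when-rule: feature constraints paired with an action."""

    def __init__(self, conditions, action):
        self.conditions = dict(conditions)  # feature -> required value
        self.action = action
        self.utility = 0.0  # running reward estimate (separate RL mechanism)

    def matches(self, state):
        return all(state.get(k) == v for k, v in self.conditions.items())

    def generalize(self, state):
        # Symbolic induction: drop any condition the new example contradicts.
        self.conditions = {k: v for k, v in self.conditions.items()
                           if state.get(k) == v}


class DecoupledLearner:
    """Couples two specialized mechanisms instead of one monolithic learner."""

    def __init__(self):
        self.rules = []

    def learn(self, state, action, reward):
        # Mechanism 1: rule induction -- generalize an existing rule for this
        # action, or create a new maximally specific one.
        for r in self.rules:
            if r.action == action:
                r.generalize(state)
                break
        else:
            self.rules.append(Rule(state, action))
        # Mechanism 2: reinforcement -- update the rule's learned utility.
        for r in self.rules:
            if r.action == action:
                r.utility += reward

    def act(self, state):
        # Prefer the matching rule with the highest learned utility.
        matching = [r for r in self.rules if r.matches(state)]
        if not matching:
            return None
        return max(matching, key=lambda r: r.utility).action
```

Because induction and reward tracking are decoupled, two correct demonstrations suffice for the rule's preconditions to generalize, rather than the many trials a pure reward signal would need.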