🤖 AI Summary
Traditional neuro-symbolic learning suffers from poor generalization on complex reasoning tasks, while purely neural foundation models (e.g., LLMs) are highly capable but lack interpretability and reliability. Method: The paper proposes "neuro-symbolic prompting", a paradigm that replaces conventional joint training with prompting, thereby sidestepping generalization bottlenecks rooted in compute, data, and programs. Contribution: It systematically identifies three fundamental pitfalls of prior approaches and advocates leveraging the native capabilities of foundation models to fulfill neuro-symbolic objectives: pairing symbolic programs with zero- or few-shot inference enables general neuro-symbolic reasoning without training task-specific modules. The paper argues this shift improves generalization, reliability, and interpretability, moving neuro-symbolic learning away from monolithic, parameter-heavy designs toward lightweight, scalable, and modular prompt-driven architectures.
📝 Abstract
Neuro-symbolic learning was proposed to address the challenges of training neural networks for complex reasoning tasks, with the added benefits of interpretability, reliability, and efficiency. Neuro-symbolic learning methods traditionally train neural models in conjunction with symbolic programs, but they face significant challenges that limit them to simplistic problems. Purely neural foundation models, on the other hand, now reach state-of-the-art performance through prompting rather than training, but they are often unreliable and lack interpretability. Supplementing foundation models with symbolic programs, which we call neuro-symbolic prompting, provides a way to use these models for complex reasoning tasks. Doing so raises the question: what role does specialized model training, as part of neuro-symbolic learning, play in the age of foundation models? To explore this question, we highlight three pitfalls of traditional neuro-symbolic learning with respect to compute, data, and programs that lead to generalization problems. This position paper argues that foundation models enable generalizable neuro-symbolic solutions, offering a path toward achieving the original goals of neuro-symbolic learning without the downsides of training from scratch.
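To make the paradigm concrete, below is a minimal sketch of neuro-symbolic prompting on a classic visual-addition task: a foundation model is prompted zero-shot to perceive symbols, and an ordinary program carries the reasoning. The task choice, function names, prompt wording, and the mocked model call are all illustrative assumptions on our part, not the paper's actual code or API; in practice the stub would be replaced with a call to a real vision-language model.

```python
# A hypothetical sketch of neuro-symbolic prompting: zero-shot perception
# by a foundation model, followed by a symbolic program for reasoning.
# No task-specific module is trained anywhere in this pipeline.

import json

def call_foundation_model(prompt: str, image_path: str) -> str:
    """Stand-in for a zero-shot call to a vision-language foundation model.
    Mocked with a canned response so the sketch runs end to end."""
    return json.dumps({"digits": [3, 5]})  # a real model would read the image

def extract_digits(image_path: str) -> list[int]:
    """Neural step: prompt the model to emit symbols in a fixed schema."""
    prompt = (
        "Identify every handwritten digit in the image. "
        'Respond with JSON only: {"digits": [<int>, ...]}'
    )
    response = call_foundation_model(prompt, image_path)
    return json.loads(response)["digits"]

def add_digits(digits: list[int]) -> int:
    """Symbolic step: a plain program computes the answer, so the reasoning
    is interpretable and exactly correct given the perceived symbols."""
    return sum(digits)

if __name__ == "__main__":
    digits = extract_digits("two_digits.png")
    print(f"perceived symbols: {digits} -> answer: {add_digits(digits)}")
```

The division of labor reflects the position's core claim: the foundation model handles perception through prompting alone, while the symbolic program supplies the reliability and interpretability that prompting by itself lacks.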