Neuro-Symbolic Artificial Intelligence: Towards Improving the Reasoning Abilities of Large Language Models

📅 2025-08-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Insufficient reasoning capabilities in large language models (LLMs) hinder their progression toward artificial general intelligence (AGI). To address this, the paper proposes a systematic enhancement pathway grounded in the neuro-symbolic AI paradigm, organized along three synergistic mechanisms: Symbolic→LLM, LLM→Symbolic, and LLM+Symbolic. The work establishes a three-dimensional taxonomy for neuro-symbolic reasoning, spanning methodology, challenges, and future directions. It integrates neuro-symbolic learning, formal task modeling, and LLM–symbolic engine co-architectures into a comprehensive survey of neuro-symbolic reasoning techniques. Complementing this contribution, the authors open-source a GitHub repository that unifies benchmarks, toolkits, and reproducible case studies, providing a unified framework and actionable guidelines for both fundamental research and practical deployment.

📝 Abstract
Large Language Models (LLMs) have shown promising results across various tasks, yet their reasoning capabilities remain a fundamental challenge. Developing AI systems with strong reasoning capabilities is regarded as a crucial milestone in the pursuit of Artificial General Intelligence (AGI) and has garnered considerable attention from both academia and industry. Various techniques have been explored to enhance the reasoning capabilities of LLMs, with neuro-symbolic approaches being particularly promising. This paper comprehensively reviews recent developments in neuro-symbolic approaches for enhancing LLM reasoning. We first present a formalization of reasoning tasks and give a brief introduction to the neuro-symbolic learning paradigm. Then, we discuss neuro-symbolic methods for improving the reasoning capabilities of LLMs from three perspectives: Symbolic→LLM, LLM→Symbolic, and LLM+Symbolic. Finally, we discuss several key challenges and promising future directions. We have also released a GitHub repository including papers and resources related to this survey: https://github.com/LAMDASZ-ML/Awesome-LLM-Reasoning-with-NeSy.
Problem

Research questions and friction points this paper is trying to address.

Improving the reasoning abilities of large language models
Addressing a fundamental challenge on the path to artificial general intelligence
Exploring neuro-symbolic approaches for enhanced LLM reasoning
Innovation

Methods, ideas, or system contributions that make the work stand out.

Neuro-symbolic integration enhances LLM reasoning
Three-way interaction between symbols and LLMs
Formalized reasoning tasks with neurosymbolic paradigm
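To make the LLM→Symbolic mechanism concrete, here is a minimal sketch of the general pattern: a neural model translates a natural-language question into a formal expression, and a deterministic symbolic engine computes the answer. This is an illustration of the paradigm, not code from the paper; `fake_llm_translate` is a hypothetical stand-in for a real model call, and the "symbolic engine" is a tiny arithmetic evaluator built on Python's `ast` module.

```python
import ast
import operator

# Symbolic engine: a safe evaluator for arithmetic expression strings.
OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def eval_expr(node):
    """Recursively evaluate a parsed arithmetic expression tree."""
    if isinstance(node, ast.Expression):
        return eval_expr(node.body)
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](eval_expr(node.left), eval_expr(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("unsupported expression")

def fake_llm_translate(question: str) -> str:
    # Hypothetical stand-in for an LLM that emits a symbolic program
    # instead of answering in free text.
    return "(3 + 5) * 4"

def solve(question: str) -> float:
    expr = fake_llm_translate(question)             # neural step
    return eval_expr(ast.parse(expr, mode="eval"))  # symbolic step

print(solve("Tom has 3 apples, buys 5 more, then quadruples them. Total?"))
# -> 32
```

Delegating execution to the symbolic engine is what gives this pattern its appeal: the final answer is guaranteed correct with respect to the emitted expression, so errors are confined to the translation step.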
Authors

- Xiao-Wen Yang — PhD student, Nanjing University (neural-symbolic learning, weakly-supervised learning, large language models)
- Jie-Jing Shao — Nanjing University (machine learning, neuro-symbolic learning, reinforcement learning)
- Lan-Zhe Guo — LAMDA Group, Nanjing University (machine learning)
- Bo-Wen Zhang — National Key Laboratory for Novel Software Technology, Nanjing University; School of Intelligence Science and Technology, Nanjing University
- Zhi Zhou — National Key Laboratory for Novel Software Technology, Nanjing University
- Lin-Han Jia — LAMDA Group, Nanjing University (machine learning)
- Wang-Zhou Dai — National Key Laboratory for Novel Software Technology, Nanjing University; School of Intelligence Science and Technology, Nanjing University
- Yu-Feng Li — Professor, Nanjing University (machine learning)