🤖 AI Summary
This work addresses the limitations of large language models in tasks requiring complex proof planning and the inability of traditional logic solvers to handle commonsense reasoning. To bridge this gap, the authors propose a neuro-symbolic iterative reasoning framework: a logic solver verifies proof attempts generated by a large language model, and the solver's feedback iteratively guides the model in supplying missing commonsense relations. Evaluated on purely logical reasoning datasets from which explicit commonsense information has been removed, the approach substantially outperforms existing methods, demonstrating the potential of integrating neural and symbolic reasoning for interpretable, reliable inference grounded in human-like contextual understanding.
📝 Abstract
Although Large Language Models (LLMs) have demonstrated impressive formal reasoning abilities, they often break down when problems require complex proof planning. One promising approach to improving LLM reasoning involves translating problems into formal logic and using a logic solver. While off-the-shelf logic solvers are in principle substantially more efficient than LLMs at logical reasoning, they assume that all relevant facts are provided in the question and cannot cope with missing commonsense relations. In this work, we propose a novel method that uses feedback from the logic solver to iteratively augment a logic problem with commonsense relations supplied by the LLM. This involves a search through potential commonsense assumptions that maximizes the chance of finding useful facts while keeping cost tractable. On a collection of purely logical reasoning datasets, from which some commonsense information has been removed, our method consistently achieves considerable improvements over existing techniques, demonstrating the value of balancing neural and symbolic elements when working in human contexts.
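The iterative loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `try_prove` and `propose_commonsense_fact` are hypothetical stand-ins for a real logic solver call and an LLM query, and the facts are toy string atoms.

```python
def try_prove(facts, goal):
    """Hypothetical solver stub: 'proves' the goal only if it is already
    among the facts. A real solver would perform logical inference."""
    return goal in facts

def propose_commonsense_fact(facts, goal):
    """Hypothetical LLM stub: returns one canned commonsense relation
    (e.g. birds can fly) where a real system would query a model."""
    if "is_bird(tweety)" in facts:
        return "can_fly(tweety)"
    return None

def iterative_reasoning(facts, goal, max_rounds=3):
    """Alternate between solver checks and LLM-proposed facts until the
    goal is provable or no new commonsense relation is found."""
    facts = set(facts)
    for _ in range(max_rounds):
        if try_prove(facts, goal):
            return True, facts
        new_fact = propose_commonsense_fact(facts, goal)
        if new_fact is None or new_fact in facts:
            break  # LLM cannot supply a new useful fact; give up
        facts.add(new_fact)  # augment the problem and retry the solver
    return try_prove(facts, goal), facts

proved, augmented = iterative_reasoning({"is_bird(tweety)"}, "can_fly(tweety)")
print(proved)  # → True: the loop added the missing commonsense relation
```

The paper's method additionally searches over multiple candidate assumptions per round to keep solver cost tractable; the single-proposal loop here only shows the feedback structure.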