Adaptive Selection of Symbolic Languages for Improving LLM Logical Reasoning

📅 2025-10-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) remain constrained in complex logical reasoning due to inaccuracies in translating natural language (NL) to symbolic language (SL), with prior work focusing solely on semantic alignment while overlooking the critical impact of SL formalism selection. Method: We propose the first systematic framework for LLM-based adaptive SL formalism selection—demonstrating that distinct NL logical problems exhibit problem-specific optimal SL representations (e.g., first-order logic, logic programming, SAT)—and integrate this with an NL-to-multiple-SL translation pipeline coupled with domain-specific logical solvers for end-to-end reasoning. Contribution/Results: Evaluated on a heterogeneous benchmark, our approach achieves 96% accuracy, outperforming the best single-SL baseline by 25 percentage points and significantly surpassing both uniform translation and random formalism selection strategies, thereby overcoming the limitations of monolithic formalization paradigms.

📝 Abstract
Large Language Models (LLMs) still struggle with complex logical reasoning. While previous works achieve remarkable improvements, their performance depends heavily on the correctness of translating natural language (NL) problems into a symbolic language (SL). Although numerous works focus on improving this translation accuracy, they consider only the semantic similarity between the SL and the NL, overlooking another crucial factor: the selection of the target SL type itself. For example, first-order logic specializes in reasoning with categorical syllogisms and complex quantifiers, while the Boolean satisfiability formalism excels at representing constraint-satisfaction problems. To our knowledge, this is the first paper to claim and verify that different NL logical reasoning problems correspond to different optimal SL formalizations for translation. Based on this, we propose a method that improves the logical reasoning performance of LLMs by adaptively selecting the most suitable SL for each problem prior to translation. Specifically, we leverage LLMs to select the target SL among first-order logic, logic programming, and Boolean satisfiability, translate the NL problem into expressions in the chosen SL, and employ the corresponding logical solver to derive the final answer. Experimental results on benchmarks show that our adaptive selection method significantly outperforms translating every problem into a single SL and selecting the SL at random. On a mixed dataset of these benchmarks, our approach achieves 96% accuracy, a 25-percentage-point improvement over the second-highest accuracy, obtained with first-order logic translation.
Problem

Research questions and friction points this paper is trying to address.

Adaptively selecting optimal symbolic languages for logical reasoning
Improving LLM performance by matching problems to suitable formalisms
Addressing translation dependency on symbolic language type selection
Innovation

Methods, ideas, or system contributions that make the work stand out.

Adaptively selects optimal symbolic language for each problem
Uses LLMs to choose among three logic formalisms
Translates natural language to selected symbolic expressions
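The select-then-solve pipeline above can be sketched in a few lines. This is a minimal, illustrative mock-up, not the authors' code: the keyword-based `select_formalism` stands in for the paper's LLM-prompted selector, and the brute-force `solve_boolean_sat` stands in for an off-the-shelf solver.

```python
from itertools import product

# The three target formalisms the paper selects among.
FORMALISMS = ("first_order_logic", "logic_programming", "boolean_sat")

def select_formalism(problem: str) -> str:
    """Stand-in for the LLM-based selector: the paper prompts an LLM to
    pick the best-suited SL per problem. These keyword rules are only
    illustrative of the idea that problem features drive the choice."""
    p = problem.lower()
    if "all " in p or "some " in p:            # quantifier cues -> FOL
        return "first_order_logic"
    if "constraint" in p or "assign" in p:     # constraint cues -> SAT
        return "boolean_sat"
    return "logic_programming"                 # rule/fact style default

def solve_boolean_sat(clauses, n_vars):
    """Toy brute-force SAT check over CNF clauses of signed ints
    (e.g. [1, -2] means x1 OR NOT x2); a real pipeline would dispatch
    to a dedicated solver for the chosen formalism instead."""
    for bits in product((False, True), repeat=n_vars):
        def holds(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(holds(lit) for lit in clause) for clause in clauses):
            return True
    return False

# A syllogism with quantifiers routes to first-order logic.
problem = "All humans are mortal; some Greeks are humans."
assert select_formalism(problem) == "first_order_logic"

# (x1 OR x2) AND (NOT x1 OR x2) is satisfiable (take x2 = True).
assert solve_boolean_sat([[1, 2], [-1, 2]], n_vars=2)
```

In the full method, the translation step (NL to SL expressions) is itself performed by the LLM, and each formalism is paired with its own solver; the dispatch structure shown here is the part the adaptive-selection claim rests on.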