Improving Chain-of-Thought Reasoning via Quasi-Symbolic Abstractions

📅 2025-02-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) suffer from reduced robustness and faithfulness in chain-of-thought (CoT) reasoning because of content biases. Method: The paper proposes QuaSAR (Quasi-Symbolic Abstract Reasoning), a variation of CoT that disentangles content from logical reasoning without requiring a complete formalisation, striking a balance between natural language and formal representation and avoiding the translation bottleneck of fully symbolic approaches. Its key idea is selective formalisation: only the relevant variables and predicates are symbolised, so symbolic elements and natural language coexist in the same explanation. QuaSAR is applied both for in-context learning and for constructing demonstrations that improve the reasoning capabilities of smaller models. Results: On benchmarks including MMLU-Redux and GSM-Symbolic, QuaSAR improves CoT accuracy by up to 8%, enhancing robustness and consistency under adversarial variations.

📝 Abstract
Chain-of-Thought (CoT) represents a common strategy for reasoning in Large Language Models (LLMs) by decomposing complex tasks into intermediate inference steps. However, explanations generated via CoT are susceptible to content biases that negatively affect their robustness and faithfulness. To mitigate existing limitations, recent work has proposed using logical formalisms coupled with external symbolic solvers. However, fully symbolic approaches possess the bottleneck of requiring a complete translation from natural language to formal languages, a process that affects efficiency and flexibility. To achieve a trade-off, this paper investigates methods to disentangle content from logical reasoning without a complete formalisation. In particular, we present QuaSAR (for Quasi-Symbolic Abstract Reasoning), a variation of CoT that guides LLMs to operate at a higher level of abstraction via quasi-symbolic explanations. Our framework leverages the capability of LLMs to formalise only relevant variables and predicates, enabling the coexistence of symbolic elements with natural language. We show the impact of QuaSAR for in-context learning and for constructing demonstrations to improve the reasoning capabilities of smaller models. Our experiments show that quasi-symbolic abstractions can improve CoT-based methods by up to 8% accuracy, enhancing robustness and consistency on challenging adversarial variations on both natural language (i.e., MMLU-Redux) and symbolic reasoning tasks (i.e., GSM-Symbolic).
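The abstraction strategy described above can be sketched as a prompt template. This is an illustrative guess, not the authors' exact prompt: the stage names and instruction wording are assumptions, and the helper `build_quasi_symbolic_prompt` is hypothetical. The idea it demonstrates is the paper's core one: ask the model to introduce symbols only for the relevant variables and predicates, while the rest of the explanation stays in natural language.

```python
# Hypothetical sketch of a quasi-symbolic CoT prompt in the spirit of QuaSAR.
# Stage names and wording are assumptions; the paper's prompts may differ.
QUASAR_STYLE_INSTRUCTIONS = (
    "Solve the problem in four stages:\n"
    "1. Abstraction: identify the relevant variables and predicates and "
    "introduce symbols for them (e.g. x = number of apples).\n"
    "2. Formalisation: restate the key relations using those symbols, "
    "keeping the surrounding explanation in natural language.\n"
    "3. Explanation: reason step by step over the symbolic relations.\n"
    "4. Answer: state the final answer.\n"
)

def build_quasi_symbolic_prompt(question: str) -> str:
    """Compose a quasi-symbolic CoT prompt for a single question."""
    return f"{QUASAR_STYLE_INSTRUCTIONS}\nQuestion: {question}\nStages:"

prompt = build_quasi_symbolic_prompt(
    "Anna has 3 apples and buys 2 more. How many apples does she have?"
)
print(prompt)
```

The same template could serve both uses reported in the paper: as an in-context-learning prompt for a large model, or to generate worked demonstrations for fine-tuning smaller models.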
Problem

Research questions and friction points this paper is trying to address.

Enhance Chain-of-Thought reasoning robustness
Mitigate content biases in reasoning steps
Combine symbolic and natural language elements
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quasi-Symbolic Abstract Reasoning
Enhanced Chain-of-Thought Robustness
Partial Formalisation in LLMs
Leonardo Ranaldi
University of Edinburgh
Natural Language Processing, Machine Learning, Artificial Intelligence
Marco Valentino
University of Sheffield
Natural Language Processing, Neurosymbolic AI, Explanation
Alexander Polonsky
BLOOM Social Analytics, Paris, France
André Freitas
Department of Computer Science, University of Manchester, UK; National Biomarker Centre (NBC), CRUK Manchester Institute, UK