Rethinking Reasoning in LLMs: Neuro-Symbolic Local RetoMaton Beyond ICL and CoT

📅 2025-08-22
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing prompt-based reasoning methods—such as chain-of-thought (CoT) and in-context learning (ICL)—rely on implicit, brittle mechanisms, rendering outputs highly sensitive to seed selection, formatting, and minor prompt perturbations, which compromises stability and interpretability. To address this, we propose a neuro-symbolic reasoning framework built on locally weighted finite automata (WFAs), replacing global context and handcrafted prompts with task-adaptive, structured symbolic memory to enable context-aware, verifiable, and modular multi-step reasoning. By compiling domain-specific corpora into lightweight local WFAs and combining them with large language models (LLMs) at inference time, our approach substantially improves the transparency and traceability of the reasoning process. Evaluated on LLaMA-3.2-1B and Gemma-3-1B-PT, it consistently outperforms baseline and state-of-the-art prompting methods on TriviaQA, GSM8K, and MMLU—achieving both substantial accuracy gains and fully reproducible, auditable reasoning traces.
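The summary above describes compiling domain corpora into lightweight local WFAs with inspectable transitions. As a minimal sketch (not the paper's implementation; the n-gram state construction and corpus below are illustrative assumptions), a local weighted automaton can be built from token sequences by treating fixed-length contexts as states and normalized next-token counts as transition weights:

```python
from collections import defaultdict

def build_local_wfa(corpus_tokens, order=2):
    """Compile a token corpus into a local weighted finite automaton.

    States are `order`-gram contexts; transition weights are
    normalized next-token counts, so every transition is explicit
    and auditable.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for seq in corpus_tokens:
        for i in range(len(seq) - order):
            state = tuple(seq[i:i + order])
            counts[state][seq[i + order]] += 1
    # Normalize raw counts into transition probabilities per state.
    wfa = {}
    for state, successors in counts.items():
        total = sum(successors.values())
        wfa[state] = {tok: c / total for tok, c in successors.items()}
    return wfa

# Toy domain corpus (hypothetical, for illustration only).
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "cat", "ran", "on", "the", "mat"]]
wfa = build_local_wfa(corpus, order=2)
# State ("the", "cat") now splits its weight between "sat" and "ran".
```

Because each state's outgoing transitions come directly from corpus counts, every retrieval step can be traced back to the exact contexts that produced it, which is the traceability property the framework emphasizes.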

📝 Abstract
Prompt-based reasoning strategies such as Chain-of-Thought (CoT) and In-Context Learning (ICL) have become widely used for eliciting reasoning capabilities in large language models (LLMs). However, these methods rely on fragile, implicit mechanisms, often yielding inconsistent outputs across seeds, formats, or minor prompt variations, making them fundamentally unreliable for tasks requiring stable, interpretable reasoning. In contrast, automata-based neuro-symbolic frameworks like RetoMaton offer a more structured and trustworthy alternative by grounding retrieval in symbolic memory with deterministic transitions. In this work, we extend RetoMaton by replacing its global datastore with a local, task-adaptive Weighted Finite Automaton (WFA), constructed directly from external domain corpora. This local automaton structure promotes robust, context-aware retrieval while preserving symbolic traceability and low inference overhead. Unlike prompting, which entangles context and memory in opaque ways, our approach leverages the explicit structure of WFAs to provide verifiable and modular retrieval behavior, making it better suited for domain transfer and interoperability. We evaluate this local RetoMaton variant on two pretrained LLMs, LLaMA-3.2-1B and Gemma-3-1B-PT, across three reasoning tasks: TriviaQA (reading comprehension), GSM8K (multi-step math), and MMLU (domain knowledge). Compared to the base model and prompting-based methods, augmenting these setups with local RetoMaton consistently improves performance while enabling transparent and reproducible retrieval dynamics. Our results highlight a promising shift toward trustworthy, symbolic reasoning in modern LLMs via lightweight, automaton-guided memory.
Problem

Research questions and friction points this paper is trying to address.

Improving reasoning reliability in LLMs beyond prompting methods
Providing stable, interpretable reasoning with deterministic transitions
Enabling verifiable, modular retrieval for domain-transfer tasks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Local Weighted Finite Automaton for retrieval
Symbolic memory with deterministic transitions
Context-aware robust retrieval with traceability
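Retrieval-augmented frameworks in the RetoMaton / kNN-LM family typically blend the automaton's retrieval distribution with the LM's next-token distribution via linear interpolation. A minimal sketch of that blending step follows; the toy distributions and the interpolation weight `lam` are illustrative assumptions, not values from the paper:

```python
def interpolate(p_lm, p_wfa, lam=0.5):
    """Blend LM next-token probabilities with automaton retrieval
    probabilities (kNN-LM / RetoMaton-style linear interpolation).

    p_final(t) = lam * p_wfa(t) + (1 - lam) * p_lm(t)
    """
    vocab = set(p_lm) | set(p_wfa)
    return {t: lam * p_wfa.get(t, 0.0) + (1 - lam) * p_lm.get(t, 0.0)
            for t in vocab}

# Hypothetical distributions for the next token after "the cat".
p_lm = {"sat": 0.6, "ran": 0.3, "slept": 0.1}   # from the LLM
p_wfa = {"sat": 0.5, "ran": 0.5}                # from the local WFA
blended = interpolate(p_lm, p_wfa, lam=0.5)
```

Because the retrieval side comes from a symbolic automaton rather than an opaque prompt, the contribution of memory to each prediction remains a separate, inspectable term in the blended distribution.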