🤖 AI Summary
Large language models exhibit limited performance on multi-step symbolic reasoning tasks, such as mathematical problem solving. To address this, we introduce a "buffer" paradigm grounded in symbolic multi-step reasoning data construction and analysis of the Transformer attention mechanism. Our approach decouples the vertical (inter-layer) and horizontal (inter-step) reasoning pathways, enabling dynamic information storage and selective retrieval. We further propose a query-key-driven buffer modeling framework and a random-matrix-based optimization algorithm to improve the efficiency of information scheduling. On the PrOntoQA benchmark, our method reduces the training time GPT-2 requires to generalize by 75%. Beyond these empirical gains, this work identifies critical information-propagation bottlenecks in chain-of-thought reasoning and establishes an interpretable, scalable framework for enhancing multi-step symbolic reasoning in neural language models.
📝 Abstract
Large language models have consistently struggled with complex reasoning tasks, such as mathematical problem-solving. Investigating the internal reasoning mechanisms of these models can help us design better model architectures and training strategies, ultimately enhancing their reasoning capability. In this study, we constructed a symbolic dataset to investigate the mechanisms by which Transformer models achieve multi-step reasoning through a vertical thinking strategy, based on their inherent layered structure, and a horizontal thinking strategy, based on Chain of Thought. We introduced the concept of a buffer mechanism: the model stores distinct pieces of information in separate buffers and selectively extracts them through the query-key matrix. We proposed a random-matrix-based algorithm to enhance the model's reasoning ability, yielding a 75% reduction in the training time required for the GPT-2 model to achieve generalization on the PrOntoQA dataset. These findings provide new insights into the internal mechanisms of large language models.
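To make the buffer idea concrete, the following is a minimal toy sketch (not the paper's implementation) of what "storing information in distinct buffers and selectively extracting it through the query-key matrix" can mean: each buffer is a near-orthogonal random direction acting as a key, and a query retrieves the associated content via dot-product attention. The dimensions, seed, and stored values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 64  # hidden dimension (illustrative)

# Three "buffers": near-orthogonal random key directions, each tagging
# one stored piece of information (random unit-scale vectors in R^d).
keys = rng.standard_normal((3, d)) / np.sqrt(d)
values = np.array([[1.0, 0.0],   # content of buffer 0
                   [0.0, 1.0],   # content of buffer 1
                   [1.0, 1.0]])  # content of buffer 2

# A query aligned with buffer 1's key selectively retrieves its content.
query = keys[1]
scores = keys @ query                 # dot-product (query-key) match scores
weights = np.exp(10 * scores)         # sharp softmax over buffers
weights /= weights.sum()
retrieved = weights @ values          # attention-weighted readout

assert np.argmax(weights) == 1        # buffer 1 dominates the retrieval
```

Because random high-dimensional directions are nearly orthogonal, the matching buffer's score is close to 1 while the others are close to 0, so the readout is approximately the content of the queried buffer.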