🤖 AI Summary
This work theoretically models the chain-of-thought (CoT) generalization capability of nonlinear Transformer models on unseen tasks and under distribution shifts, focusing on how input augmentation—i.e., providing few-shot examples with intermediate steps—enables robust multi-step reasoning. Addressing the lack of theoretical analysis of nonlinear attention under non-convex optimization in the existing literature, we establish the first formal framework for CoT generalization: we rigorously characterize the sample and iteration complexity of training, derive precise conditions for accurate inference even from noisy demonstrations, and identify settings in which CoT succeeds while single-step in-context learning (ICL) fails. Our methodology integrates non-convex optimization theory, nonlinear attention modeling, generalization error bounds, and distributional robustness analysis. Empirical validation confirms the theory: under both noisy examples and distribution shifts, CoT achieves significantly higher reasoning accuracy than ICL.
📝 Abstract
Chain-of-Thought (CoT) is an efficient prompting method that elicits the reasoning ability of large language models by augmenting the query with multiple examples containing intermediate steps. Despite its empirical success, the theoretical understanding of how to train a Transformer to achieve CoT ability remains underexplored, primarily due to the technical challenges of analyzing nonconvex optimization on nonlinear attention models. To the best of our knowledge, this work provides the first theoretical study of training Transformers with nonlinear attention to obtain the CoT generalization capability, so that the resulting model can perform inference on unseen tasks when the input is augmented with examples of the new task. We first quantify the training samples and iterations required to train a Transformer toward CoT ability. We then prove that its CoT generalization succeeds on unseen tasks with distribution-shifted testing data. Moreover, we theoretically characterize the conditions under which CoT yields an accurate reasoning output even when the provided reasoning examples contain noise and are not always accurate. In contrast, in-context learning (ICL), which can be viewed as one-step CoT without intermediate steps, may fail to provide an accurate output in settings where CoT succeeds. These theoretical findings are supported by experiments.
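To make the input augmentation being analyzed concrete, here is a minimal sketch contrasting a CoT prompt (demonstrations expose intermediate steps) with a one-step ICL prompt (demonstrations map inputs directly to answers). The toy arithmetic task, the example strings, and the helper names are hypothetical illustrations, not the paper's construction.

```python
# Hypothetical demonstrations for a toy two-step task:
# each tuple is (query, intermediate step, final answer).
examples = [
    ("(2 + 3) * 4", "2 + 3 = 5", "20"),
    ("(1 + 6) * 2", "1 + 6 = 7", "14"),
]

def build_icl_prompt(examples, query):
    """One-step ICL: demonstrations map the input directly to the answer."""
    demos = "\n".join(f"Q: {q}\nA: {a}" for q, _, a in examples)
    return f"{demos}\nQ: {query}\nA:"

def build_cot_prompt(examples, query):
    """CoT: demonstrations additionally expose the intermediate step."""
    demos = "\n".join(f"Q: {q}\nStep: {s}\nA: {a}" for q, s, a in examples)
    return f"{demos}\nQ: {query}\nStep:"

print(build_cot_prompt(examples, "(3 + 4) * 5"))
```

The only difference between the two prompts is the `Step:` lines; the paper's analysis concerns why training and prompting with these intermediate steps yields generalization that the step-free ICL prompt can lack.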