AI Summary
The fundamental mechanism underlying large language model (LLM) reasoning, particularly its cross-task generalization, remains poorly understood. Method: We establish a formal theoretical correspondence between LLM inference and meta-learning optimization, framing single-query reasoning as an *in-task* meta-optimization process: each question is a distinct task, and the chain-of-thought (CoT) trajectory implicitly implements pseudo-gradient descent for latent parameter adaptation, without explicit weight updates. We formalize this via pseudo-gradient modeling, trajectory-driven implicit parameter updates, and task-level reasoning abstraction. Contribution/Results: This work introduces the "reasoning-as-parameter-adaptation" paradigm, providing the first systematic mapping between LLM inference dynamics and meta-learning inner-loop optimization. Empirical validation across multiple reasoning benchmarks confirms strong consistency between CoT trajectories and meta-optimized adaptation paths. Beyond unifying conceptual foundations, our framework enables direct transfer of established meta-learning algorithms, including MAML and Reptile, to enhance LLM reasoning capabilities in practice.
Abstract
We propose a framework for understanding the reasoning capabilities of large language models (LLMs) through the lens of meta-learning. By conceptualizing reasoning trajectories as pseudo-gradient descent updates to the LLM's parameters, we identify parallels between LLM reasoning and various meta-learning paradigms. We formalize the training process for reasoning tasks as a meta-learning setup: each question is treated as an individual task, and the reasoning trajectory serves as the inner-loop optimization that adapts the model's parameters to that task. Once trained on a diverse set of questions, the LLM develops fundamental reasoning capabilities that generalize to previously unseen questions. Extensive empirical evaluations substantiate the strong connection between LLM reasoning and meta-learning, examining several questions of significant interest from a meta-learning standpoint. Our work not only deepens the understanding of LLM reasoning but also provides practical insights for improving these models through established meta-learning techniques.
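To make the "question as task, reasoning trajectory as inner loop" framing concrete, here is a minimal toy sketch of a MAML-style inner loop on a quadratic loss. This is not the paper's implementation; all names (`task_loss`, `inner_loop`, the learning rate and step count) are illustrative assumptions. Each inner-loop step is the analogue of one chain-of-thought step: an implicit pseudo-gradient update that adapts latent parameters toward a task-specific optimum.

```python
import numpy as np

def task_loss(theta, target):
    """Toy per-task loss: squared distance to a task-specific optimum."""
    return 0.5 * np.sum((theta - target) ** 2)

def task_grad(theta, target):
    """Gradient of the toy loss with respect to theta."""
    return theta - target

def inner_loop(theta, target, lr=0.1, steps=5):
    """Analogue of a reasoning trajectory: a short sequence of
    pseudo-gradient updates adapting parameters to one task (question)."""
    path = [theta.copy()]
    for _ in range(steps):
        theta = theta - lr * task_grad(theta, target)
        path.append(theta.copy())
    return theta, path

theta0 = np.zeros(3)                   # shared "meta" initialization
target = np.array([1.0, -2.0, 0.5])    # one question = one task optimum
theta_T, path = inner_loop(theta0, target)

# The loss decreases monotonically along the adaptation path,
# mirroring the claimed consistency between CoT trajectories
# and meta-optimized adaptation paths.
losses = [task_loss(t, target) for t in path]
```

In full MAML the outer loop would then update `theta0` using the post-adaptation loss across many tasks; the paper's claim is that training an LLM on diverse questions plays this outer-loop role, yielding an initialization whose inner-loop (reasoning) adaptation generalizes to unseen questions.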