AI Summary
This work investigates how to leverage reward signals during post-training to improve autoregressive models, and characterizes the limits imposed by the base model's support. For sequence prediction tasks satisfying a $\gamma$-margin condition, the authors analyze a policy gradient method that uses both outcome-based and process-based rewards. They introduce the Likelihood Quantile (LQ), a property of the base model that characterizes a performance barrier for post-training. By employing process rewards to mitigate the curse of dimensionality in the sequence length, combined with stochastic gradient descent equipped with an adaptive learning rate, they establish the near-optimality of the proposed policy gradient approach in this setting: it achieves $(1-\varepsilon)$-high-likelihood predictions on inputs where the base model has non-trivial likelihood, attains minimax-optimal query complexity, and avoids the exponential dependence on the sequence length $N$.
Abstract
We study post-training linear autoregressive models with outcome and process rewards. Given a context $\boldsymbol{x}$, the model must predict the response $\boldsymbol{y} \in Y^N$, a sequence of length $N$ satisfying a $\gamma$-margin condition, an extension of standard separability to sequences. We prove that on test samples where the base model achieves a non-trivial likelihood $\alpha$, a variant of policy gradient (PG) can achieve likelihood $1 - \varepsilon$ with an essentially minimax-optimal number of reward queries $\tilde{O}((\alpha^{-1} + \varepsilon^{-1})/\gamma^2)$. However, a barrier arises when going beyond the support of the base model. We prove that the overall expected error after post-training with outcome rewards is governed by a property of the base model called the Likelihood Quantile (LQ), and that variants of PG, while minimax optimal, may require a number of reward queries exponential in $N$ to go beyond this support, regardless of the pre-training algorithm. To overcome this barrier, we study post-training with a process reward model, and demonstrate how PG variants in this setting avoid the curse of dimensionality in $N$ via a dependence on a token-level LQ. Along the way, we prove that under the margin condition, SGD with an adaptive learning rate (LR) achieves a near-optimal test error for statistical learning, and PG with an adaptive LR achieves a near-optimal number of mistakes for online learning while being computationally efficient whenever possible; both results may be of independent interest.
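The outcome-reward setting described above, where PG only observes a single binary reward for the entire sampled sequence, can be illustrated with a toy sketch. Everything below (the binary vocabulary, the one-hot feature map, the target sequence, and the AdaGrad-style adaptive step size) is an assumed minimal instantiation for illustration, not the paper's construction or its actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): a linear autoregressive policy over Y = {0, 1}.
V, N, d = 2, 4, 8                      # vocab size, sequence length, feature dim
target = np.array([1, 0, 1, 1])        # the "correct" response y*

def features(t, prefix, tok):
    # Assumed feature map phi(x, y_<t, y_t): one-hot over (position, token).
    # The prefix is ignored in this toy example (context-free features).
    phi = np.zeros(d)
    phi[(2 * t + tok) % d] = 1.0
    return phi

def sample_seq(w):
    """Autoregressively sample y ~ pi_w and record per-token score functions."""
    seq, grads = [], []
    for t in range(N):
        logits = np.array([w @ features(t, seq, a) for a in range(V)])
        p = np.exp(logits - logits.max())
        p /= p.sum()
        a = rng.choice(V, p=p)
        # grad of log pi_w(a | prefix) = phi(a) - E_{b~p}[phi(b)]
        g = features(t, seq, a) - sum(p[b] * features(t, seq, b) for b in range(V))
        seq.append(int(a))
        grads.append(g)
    return np.array(seq), grads

def pg_outcome(w, steps=2500, lr0=0.5):
    """REINFORCE with an outcome reward (1 iff the whole sequence is correct)
    and a simple AdaGrad-style adaptive learning rate."""
    G = 1e-8
    for _ in range(steps):
        seq, grads = sample_seq(w)
        R = float(np.array_equal(seq, target))   # outcome reward on the full sequence
        g = R * np.sum(grads, axis=0)
        G += g @ g
        w = w + lr0 / np.sqrt(G) * g
    return w

w = pg_outcome(np.zeros(d))
success_rate = np.mean(
    [np.array_equal(sample_seq(w)[0], target) for _ in range(200)]
)
```

Because the reward is nonzero only when all $N$ tokens are correct, the frequency of informative updates shrinks as the target sequence becomes rarer under the base policy; this is the dimension dependence that the abstract's process-reward analysis is designed to avoid.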