🤖 AI Summary
This work investigates the capability of Transformers to solve sparse recovery (LASSO) problems via in-context learning (ICL), without any parameter updates. We formalize ICL as an implicit learning-to-optimize (L2O) process and prove that a K-layer Transformer can emulate an L2O algorithm with linear convergence, where the convergence rate improves linearly with K. Our analysis establishes, for the first time, that Transformers not only outperform standard gradient descent but also generalize across unseen measurement matrices, adapt to varying numbers of demonstration pairs, and exploit problem structure to accelerate convergence. Experiments confirm strong generalization and efficiency in sparse signal recovery. By circumventing the need for explicit retraining or a fixed optimization architecture, key limitations of conventional L2O, this work provides a novel theoretical lens for understanding the implicit optimization capabilities of large language models.
📝 Abstract
An intriguing property of the Transformer is its ability to perform in-context learning (ICL): a Transformer can solve different inference tasks, without any parameter updates, based on the contextual information provided by input-output demonstration pairs. It has been theoretically shown that ICL is enabled by the capability of Transformers to perform gradient-descent algorithms (Von Oswald et al., 2023a; Bai et al., 2024). This work takes a step further and shows that Transformers can perform learning-to-optimize (L2O) algorithms. Specifically, for in-context sparse recovery tasks (formulated as LASSO), we show that a K-layer Transformer can perform an L2O algorithm with a provable convergence rate linear in K. This provides a new perspective on the superior ICL capability of Transformers, even with only a few layers, which cannot be matched by standard gradient-descent algorithms. Moreover, unlike conventional L2O algorithms, which require the measurement matrix used in training to match that in testing, the trained Transformer can solve sparse recovery problems generated with different measurement matrices. In addition, Transformers, acting as an L2O algorithm, can leverage structural information embedded in the training tasks to accelerate convergence during ICL, and can generalize across different numbers of demonstration pairs, settings where conventional L2O algorithms typically struggle or fail. These theoretical findings are supported by our experimental results.
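For readers unfamiliar with the optimization problem in question, a classical iterative LASSO solver that unrolled/L2O methods are designed to emulate, one iteration per layer, is ISTA (iterative soft-thresholding). The sketch below is purely illustrative and is not the paper's construction: the problem sizes, penalty `lam`, step size, and iteration count are all our own assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    """Proximal operator of the l1 norm: shrink each entry toward zero by tau."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista(A, b, lam, num_iters):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 via ISTA."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2   # step size 1/L, L = ||A||_2^2
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        # Gradient step on the quadratic term, then the l1 proximal step.
        x = soft_threshold(x - eta * A.T @ (A @ x - b), eta * lam)
    return x

# Synthetic noiseless sparse-recovery instance (sizes are assumptions).
rng = np.random.default_rng(0)
m, n, k = 100, 200, 5                        # measurements, dimension, sparsity
A = rng.standard_normal((m, n)) / np.sqrt(m)  # random measurement matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x_true
x_hat = ista(A, b, lam=0.05, num_iters=1000)
```

In the L2O view, each ISTA update is replaced by a learned layer; the paper's result is that K Transformer layers can realize such a scheme with error decaying linearly in K, whereas plain gradient descent on the same objective converges more slowly.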