Provably Learning from Modern Language Models via Low Logit Rank

📅 2025-12-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work builds on the empirically pervasive low-logit-rank phenomenon in large language models (LLMs) by introducing the first end-to-end, provable learning framework tailored to practical generative models. Under a logit query access model, we design a polynomial-time algorithm that learns any approximately low-logit-rank generative model. Our approach integrates low-rank matrix analysis, query-based learning theory, and probabilistic modeling in logit space, turning the empirical low-rank structure into a computationally exploitable structural prior for the first time. The guarantees hold even though low-logit-rank models can encode notoriously hard distributions, such as noisy parity functions, which is precisely why query access is assumed. The core contribution is the first learnability theory for generative models aligned with observed LLM behavior, formally demonstrating that low logit rank is not merely an empirical observation but a mathematically tractable inductive bias amenable to rigorous algorithmic exploitation.
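To make the "approximately low logit rank" notion concrete, the following is a minimal numerical sketch (not code from the paper; the matrix shapes and noise level are assumptions): a contexts-by-tokens matrix of log-probabilities with a planted low-rank structure, checked against its best rank-k approximations via the SVD.

```python
import numpy as np

# Hypothetical illustration: rows index context sequences, columns index next
# tokens; entries are the model's log-probabilities. We synthesize an exactly
# rank-5 "logit matrix" plus small noise, mimicking approximate low logit rank.
rng = np.random.default_rng(0)
n_contexts, n_tokens, true_rank = 200, 50, 5

A = rng.normal(size=(n_contexts, true_rank))
B = rng.normal(size=(true_rank, n_tokens))
logits = A @ B + 0.01 * rng.normal(size=(n_contexts, n_tokens))

# Best rank-k approximation error (Frobenius) comes from the singular values:
# err(k) = sqrt(sum of squared singular values beyond the k-th).
s = np.linalg.svd(logits, compute_uv=False)
tail_energy = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]
rel_err = tail_energy / np.linalg.norm(logits)
print([round(float(rel_err[k]), 4) for k in range(8)])
```

The relative error stays large up to rank 4 and collapses to the noise floor at rank 5, which is the kind of sharp spectral drop the low-logit-rank observation refers to.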

📝 Abstract
While modern language models and their inner workings are incredibly complex, recent work (Golowich, Liu & Shetty, 2025) has proposed a simple and potentially tractable abstraction for them through the observation that, empirically, these language models all seem to have approximately low logit rank. Roughly, this means that a matrix formed by the model's log probabilities of various tokens conditioned on certain sequences of tokens is well approximated by a low-rank matrix. In this paper, our focus is on understanding how this structure can be exploited algorithmically for obtaining provable learning guarantees. Since low logit rank models can encode hard-to-learn distributions such as noisy parities, we study a query learning model with logit queries that reflects the access model for common APIs. Our main result is an efficient algorithm for learning any approximately low logit rank model from queries. We emphasize that our structural assumption closely reflects the behavior that is empirically observed in modern language models. Thus, our result gives what we believe is the first end-to-end learning guarantee for a generative model that plausibly captures modern language models.
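The noisy parity distributions mentioned in the abstract can be sketched quickly (a standard construction, not code from the paper; the parameters are illustrative): each sample is a random bit string whose label is the XOR of a hidden subset of bits, flipped with some noise probability. Learning the hidden subset from samples alone is believed to be hard, which is why the paper allows logit queries instead.

```python
import random

# Noisy parity source: label = XOR of bits indexed by a hidden set `secret`,
# flipped with probability eta. Hypothetical parameters for illustration.
def noisy_parity_sample(n, secret, eta, rng):
    x = [rng.randrange(2) for _ in range(n)]
    label = sum(x[i] for i in secret) % 2
    if rng.random() < eta:
        label ^= 1  # noise flip
    return x, label

rng = random.Random(0)
secret = {1, 4, 7}
samples = [noisy_parity_sample(10, secret, 0.1, rng) for _ in range(5000)]

# Sanity check: labels agree with the true parity about 90% of the time.
agree = sum(lab == sum(x[i] for i in secret) % 2 for x, lab in samples) / len(samples)
print(round(agree, 3))
```

Because any single bit (or small subset of bits) is nearly uncorrelated with the label, sample-based learners get no local signal; query access is the extra power the paper's algorithm relies on.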
Problem

Research questions and friction points this paper is trying to address.

Learning from low logit rank language models with provable guarantees
Exploiting low logit rank structure algorithmically for efficient learning
Providing end-to-end learning guarantees for generative models resembling modern LMs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits the empirically observed low logit rank structure for learning
Uses a query learning model with logit queries, mirroring common API access
Gives an efficient algorithm for learning any approximately low logit rank model
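The logit-query access model in the bullets above can be sketched as follows (a toy stand-in, not the paper's construction; the class, method names, and hash-based context features are assumptions): a model answers next-token log-probability queries for any prompt, and a learner assembles those answers into a logit matrix whose rank it can then inspect.

```python
import math
import numpy as np

# Toy model whose next-token logits are a rank-r function of the prompt,
# exposed only through a query interface (mirroring APIs that return logprobs).
class ToyLowLogitRankModel:
    def __init__(self, rank=3, vocab=16, seed=1):
        self.rng = np.random.default_rng(seed)
        self.token_emb = self.rng.normal(size=(vocab, rank))  # token factors
        self.rank = rank

    def _context_feature(self, prompt):
        # Deterministic rank-dim feature of the prompt (hash-based stand-in).
        rng = np.random.default_rng(abs(hash(tuple(prompt))) % (2**32))
        return rng.normal(size=self.rank)

    def logit_query(self, prompt):
        # Log-softmax over next tokens; raw logits = feature @ emb^T.
        z = self._context_feature(prompt) @ self.token_emb.T
        return z - z.max() - math.log(np.exp(z - z.max()).sum())

# The learner probes many prompts and stacks the answers into a matrix.
model = ToyLowLogitRankModel()
prompts = [(i, i + 1) for i in range(40)]
L = np.stack([model.logit_query(p) for p in prompts])

# The log-probability matrix has rank at most r + 1: rank r from the logits
# plus a rank-1 per-row normalization term.
s = np.linalg.svd(L, compute_uv=False)
print(int((s > 1e-8 * s[0]).sum()))  # numerical rank
```

Note the per-row softmax normalization only adds one to the rank, so "low logit rank" survives the move from raw logits to log-probabilities; this is the matrix a query-based learner can actually observe.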