Next-Token Prediction Should be Ambiguity-Sensitive: A Meta-Learning Perspective

📅 2025-06-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
Autoregressive large language models degrade in highly ambiguous contexts because they blindly average over predictions, which hinders Bayes-optimal inference and efficient allocation of compute. To address this, we propose an ambiguity-sensitive next-token prediction paradigm that introduces cognitive-science-inspired hierarchical ambiguity-resolution mechanisms into autoregressive modeling. Our approach decouples task-level inference from token-level generation and incorporates ambiguity-aware inductive biases. Building on an interpretable and scalable Monte Carlo prediction architecture, we empirically show that standard Transformers are not robust to ambiguity on the MetaHMM meta-learning benchmark, while the modified model achieves significant gains in prediction accuracy under high ambiguity together with better computational efficiency and test-time-scalable inference.

📝 Abstract
The rapid adaptation ability of auto-regressive foundation models is often attributed to the diversity of their pre-training data. This is because, from a Bayesian standpoint, minimizing prediction error in such settings requires integrating over all plausible latent hypotheses consistent with observations. While this behavior is desirable in principle, it often proves too ambitious in practice: under high ambiguity, the number of plausible latent alternatives makes Bayes-optimal prediction computationally intractable. Cognitive science has long recognized this limitation, suggesting that under such conditions, heuristics or information-seeking strategies are preferable to exhaustive inference. Translating this insight to next-token prediction, we hypothesize that low- and high-ambiguity predictions pose different computational demands, making ambiguity-agnostic next-token prediction a detrimental inductive bias. To test this, we introduce MetaHMM, a synthetic sequence meta-learning benchmark with rich compositional structure and a tractable Bayesian oracle. We show that Transformers indeed struggle with high-ambiguity predictions across model sizes. Motivated by cognitive theories, we propose a method to convert pre-trained models into Monte Carlo predictors that decouple task inference from token prediction. Preliminary results show substantial gains in ambiguous contexts through improved capacity allocation and test-time scalable inference, though challenges remain.
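The abstract's Bayesian framing — minimizing prediction error requires integrating over all plausible latent hypotheses consistent with the observations — can be made concrete with a toy sketch. The code below is an illustrative assumption, not the paper's implementation: it treats each latent "task" as a first-order Markov chain over a three-token vocabulary (MetaHMM's tasks are richer HMMs) and computes the Bayes-optimal next-token distribution as a posterior-weighted mixture over the hypothesis set. The hypothesis matrices and uniform prior are invented for illustration.

```python
import numpy as np

# Toy hypothesis space: each "task" is a Markov chain over 3 tokens,
# given by its transition matrix (invented for illustration).
hypotheses = [
    np.array([[0.8, 0.1, 0.1],
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]]),   # "sticky" chain: tokens tend to repeat
    np.array([[0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8],
              [0.8, 0.1, 0.1]]),   # "cycling" chain: 0 -> 1 -> 2 -> 0
]
prior = np.full(len(hypotheses), 1.0 / len(hypotheses))

def log_likelihood(T, seq):
    """Log-probability of a token sequence under transition matrix T."""
    return sum(np.log(T[a, b]) for a, b in zip(seq[:-1], seq[1:]))

def bayes_optimal_predictive(seq):
    """p(next | seq): posterior-weighted mixture over all hypotheses."""
    log_post = np.log(prior) + np.array(
        [log_likelihood(T, seq) for T in hypotheses])
    post = np.exp(log_post - log_post.max())   # stable normalization
    post /= post.sum()
    # Mix each hypothesis's next-token distribution by its posterior weight.
    return sum(w * T[seq[-1]] for w, T in zip(post, hypotheses))

pred = bayes_optimal_predictive([0, 0, 0, 0])  # evidence favors the sticky chain
```

With two hypotheses this sum is trivial; the paper's point is that in realistic high-ambiguity settings the number of plausible hypotheses makes this exact marginalization intractable, motivating ambiguity-sensitive alternatives.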
Problem

Research questions and friction points this paper is trying to address.

Addresses computational intractability in Bayes-optimal next-token prediction under ambiguity
Proposes ambiguity-sensitive prediction to replace detrimental ambiguity-agnostic approaches
Introduces MetaHMM benchmark to test Transformers' high-ambiguity prediction limitations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Ambiguity-sensitive next-token prediction approach
MetaHMM benchmark for meta-learning evaluation
Monte Carlo predictors for scalable inference
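The "Monte Carlo predictors" idea can be illustrated with a minimal sketch: rather than marginalizing over tasks exactly, sample K task hypotheses from an (approximate) task posterior and average their token-level predictions, so prediction quality can be scaled at test time by raising K. Everything below — the fixed posterior weights and the per-task next-token distributions — is a hypothetical stand-in, not the paper's architecture; in the paper these would come from learned task-inference and token-prediction components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-task next-token distributions (stand-ins for a
# token-level predictor conditioned on an inferred task).
hypotheses = [
    np.array([0.7, 0.2, 0.1]),   # next-token distribution under task A
    np.array([0.1, 0.2, 0.7]),   # next-token distribution under task B
]
# Stand-in for the output of a task-inference module (fixed here).
posterior = np.array([0.9, 0.1])

def monte_carlo_predictive(k):
    """Decoupled prediction: sample k tasks, then average token predictions.
    Larger k -> closer to the exact posterior mixture (test-time scaling)."""
    idx = rng.choice(len(hypotheses), size=k, p=posterior)
    return np.mean([hypotheses[i] for i in idx], axis=0)

exact = posterior @ np.stack(hypotheses)   # exact mixture, for reference
approx = monte_carlo_predictive(k=1000)    # Monte Carlo estimate
```

The decoupling is the key design choice: task inference and token prediction are separate steps, so compute spent on resolving ambiguity (more samples, better posterior) is allocated independently of the per-token generation cost.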