Predicting Through Generation: Why Generation Is Better for Prediction

📅 2025-02-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses two key challenges in autoregressive generative forecasting: exposure bias and format mismatch. To tackle them, we propose PredGen, a unified framework that uses scheduled sampling to keep training consistent with inference and a task adapter to convert generated tokens into structured outputs, together with the Writer-Director Alignment Loss (WDAL), which explicitly aligns the generated token sequence (the "Writer") with the final structured prediction (the "Director"), thereby bridging the semantic gap between discrete tokens and continuous or structured targets. Unlike conventional pooling-based representations, PredGen's token-level generation preserves more mutual information, aligning better with the next-token pretraining objective of large language models. Evaluated across diverse classification and regression benchmarks, PredGen consistently outperforms standard baselines, yielding significant improvements in both textual coherence and numerical prediction accuracy, and unifies generative modeling and structured prediction within a single, principled optimization framework.
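The summary describes WDAL as jointly aligning token generation with the structured task prediction. The paper's exact formulation is not reproduced here, so the sketch below shows only the generic pattern such a composite objective follows: a token-level negative log-likelihood term (the "Writer") plus a weighted structured-prediction error (the "Director"). The function name, arguments, and weighting scheme are all illustrative assumptions.

```python
import numpy as np

def writer_director_loss(token_logprobs, gold_token_ids,
                         pred_value, target_value, lam=1.0):
    """Illustrative composite objective in the spirit of WDAL (not the
    paper's actual loss): jointly penalize the token sequence and the
    final structured prediction so the two stay consistent.

    token_logprobs: (T, V) array of per-step log-probabilities.
    gold_token_ids: (T,) gold token indices.
    pred_value / target_value: scalar task output and target (regression).
    lam: assumed trade-off weight between the two terms.
    """
    # "Writer" term: average negative log-likelihood of the gold tokens.
    steps = np.arange(len(gold_token_ids))
    nll = -np.mean(token_logprobs[steps, gold_token_ids])
    # "Director" term: structured-prediction error (squared error here).
    task_err = (pred_value - target_value) ** 2
    return nll + lam * task_err
```

With uniform log-probabilities over a vocabulary of 3, the NLL term is exactly log 3, which makes the behavior easy to check by hand.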

📝 Abstract
This paper argues that generating output tokens is more effective than using pooled representations for prediction tasks because token-level generation retains more mutual information. Since LLMs are trained on massive text corpora using next-token prediction, generation aligns naturally with their learned behavior. Using the Data Processing Inequality (DPI), we provide both theoretical and empirical evidence supporting this claim. However, autoregressive models face two key challenges when used for prediction: (1) exposure bias, where the model sees ground-truth tokens during training but relies on its own predictions during inference, leading to compounding errors, and (2) format mismatch, where discrete tokens do not always align with the task's required output structure. To address these challenges, we introduce PredGen (Predicting Through Generating), an end-to-end framework that (i) uses scheduled sampling to reduce exposure bias, and (ii) introduces a task adapter to convert the generated tokens into structured outputs. Additionally, we introduce the Writer-Director Alignment Loss (WDAL), which ensures consistency between token generation and final task predictions, improving both text coherence and numerical accuracy. We evaluate PredGen on multiple classification and regression benchmarks. Our results show that PredGen consistently outperforms standard baselines, demonstrating its effectiveness in structured prediction tasks.
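The abstract's first remedy, scheduled sampling, can be sketched as follows. At each decoding step during training, the model is fed either the gold token or its own previous prediction, chosen at random; annealing the teacher-forcing probability toward zero makes training conditions converge to inference conditions, which is how exposure bias is reduced. The single-step `predict_next` interface below is a hypothetical stand-in, not the paper's API.

```python
import random

def scheduled_sampling_step(model, input_ids, target_ids, teacher_forcing_prob):
    """One training-time decoding pass with scheduled sampling.

    With probability `teacher_forcing_prob` the next input is the gold
    token; otherwise it is the model's own prediction. Early in training
    the probability is kept high and is annealed toward 0, so the model
    gradually learns to condition on its own outputs (names and the
    `predict_next` interface are illustrative assumptions).
    """
    generated = []
    prev_token = input_ids[-1]
    for gold in target_ids:
        pred = model.predict_next(prev_token)  # hypothetical one-step decode
        generated.append(pred)
        # Coin flip: feed the gold token (teacher forcing) or the model's
        # own prediction (inference-like) as the next input.
        prev_token = gold if random.random() < teacher_forcing_prob else pred
    return generated
```

Setting `teacher_forcing_prob=1.0` recovers standard teacher forcing, while `0.0` recovers free-running (inference-style) decoding; a schedule interpolates between the two over the course of training.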
Problem

Research questions and friction points this paper is trying to address.

Is token-level generation more effective than pooled representations for prediction tasks?
How can exposure bias in autoregressive models be mitigated?
How can discrete generated tokens be made consistent with structured output formats?
Innovation

Methods, ideas, or system contributions that make the work stand out.

Token-level generation retains more information
Scheduled sampling reduces exposure bias
Task adapter converts tokens to structured outputs
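The last bullet, converting generated tokens into structured outputs, can be sketched as a small head on top of the generated sequence's hidden states. The linear map over the final generated token's state below is a minimal illustrative assumption, not the paper's actual adapter architecture.

```python
import numpy as np

def task_adapter(token_states, w, b):
    """Minimal sketch of a task adapter (illustrative, not the paper's
    module): maps the hidden states of the *generated* token sequence,
    rather than a pooled encoder vector, to a structured output.

    token_states: (T, H) hidden states of the generated tokens.
    w: (H,) weight vector, b: scalar bias (a regression head here;
    a classification head would emit logits instead).
    """
    last = token_states[-1]          # state after generating the full answer
    return float(last @ w + b)       # scalar prediction for a regression task
```

Because the head reads the states produced while generating the answer tokens, the structured prediction stays tied to what the model actually wrote, which is the consistency that WDAL then enforces during training.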