Don't Throw Away Your Pretrained Model

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Alignment training enhances language models' reasoning and instruction-following capabilities but often degrades creativity and calibration, attributes where base models excel. Method: The paper proposes Switch Generation, a framework for fine-grained, dynamic collaboration between pretrained and aligned models during generation. A reinforcement-learned switcher LM orchestrates alternating segment-level generation between the two models, selectively leveraging their complementary strengths and reusing model checkpoints that would otherwise be discarded, without retraining them. Contribution/Results: Evaluated on 18 datasets against 8 baselines, Switch Generation outperforms the baselines by 12.9% on average, and model collaboration surpasses individual models on 16 of the 18 tasks, including creative writing, calibrated prediction, and compositional reasoning, while generalizing to unseen models and tasks.

📝 Abstract
Alignment training has tradeoffs: it helps language models (LMs) gain in reasoning and instruction following, but it can cost skills such as creativity and calibration, at which unaligned base models are better. We aim to get the best of both worlds through model collaboration, where different models in the training pipeline collaborate and complement each other. Since LM responses interleave skills that favor different models, we propose Switch Generation, where pretrained and aligned model versions take turns to "speak" in a response sequence. Specifically, we train a switcher LM by learning from the outcomes of choosing different models to generate the next segment across diverse queries and contexts. At inference time, the switcher LM guides different model checkpoints to dynamically generate the next segment where their strengths are most needed. Extensive experiments with 8 model collaboration baselines and 18 datasets show that 1) model collaboration consistently outperforms individual models on 16 out of 18 tasks, and 2) Switch Generation further outperforms baselines by 12.9% on average. Further analysis reveals that Switch Generation discovers compositional skills to solve problems where individual models struggle, and that it generalizes to unseen models and tasks, reusing and repurposing by-products of expensive model training pipelines that would otherwise be discarded.
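The switching loop described in the abstract can be sketched as below. The generator and switcher functions here are hypothetical stand-ins for illustration (the paper's switcher is itself a trained LM, not a hand-written rule), and the trivial alternating policy is an assumption, not the learned behavior.

```python
# Minimal sketch of segment-level switch generation. `base_generate`,
# `aligned_generate`, and `switcher` are hypothetical stand-ins, not the
# paper's implementation.

def base_generate(context):
    # Stand-in for the pretrained base model: emit the next segment.
    return "[base-segment]"

def aligned_generate(context):
    # Stand-in for the aligned model: emit the next segment.
    return "[aligned-segment]"

def switcher(context):
    # Stand-in for the trained switcher LM: given the response so far,
    # pick which model should generate the next segment. Here we just
    # alternate on segment count for illustration.
    return base_generate if len(context) % 2 == 0 else aligned_generate

def switch_generation(prompt, max_segments=4):
    # Models take turns "speaking": each iteration, the switcher selects
    # a model, which appends the next segment to the shared context.
    context = [prompt]
    for _ in range(max_segments):
        model = switcher(context)
        context.append(model(context))
    return " ".join(context)

print(switch_generation("Q: ..."))
```

The key design point is that selection happens per segment, conditioned on everything generated so far, rather than routing the whole query to a single model up front.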
Problem

Research questions and friction points this paper is trying to address.

Addresses tradeoffs between alignment gains and skill losses in language models
Proposes model collaboration to combine strengths of pretrained and aligned models
Introduces dynamic switching mechanism for optimal skill utilization during generation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Switch Generation enables pretrained and aligned models to alternate within a single response
Switcher LM dynamically selects optimal model for each segment
Model collaboration outperforms individual models across diverse tasks