🤖 AI Summary
Ensembles of large language models (LLMs) can degrade in long-text generation when token-level integration is applied at every position. Method: This paper proposes SAFE (Stable And Fast LLM Ensembling), a framework that selectively ensembles only at high-value positions, identified by jointly considering tokenization mismatch across models and consensus in their next-token probability distributions. It further stabilizes outputs via probability sharpening, which consolidates probability mass spread across multiple sub-word tokens of the same word into a single representative token, avoiding costly integration at every token. Results: On benchmarks including MATH500 and BBH, SAFE achieves superior accuracy while ensembling fewer than 1% of tokens, outperforming state-of-the-art methods, and it also delivers substantial gains in inference efficiency. SAFE thus offers an efficient and robust approach to collaborative generation among large models.
📝 Abstract
Ensembling Large Language Models (LLMs) has gained attention as a promising approach to surpass the performance of individual models by leveraging their complementary strengths. In particular, aggregating models' next-token probability distributions to select the next token has been shown to be effective in various tasks. However, while successful for short-form answers, its application to long-form generation remains underexplored. In this paper, we show that using existing ensemble methods in long-form generation requires a careful choice of ensembling positions, since the standard practice of ensembling at every token often degrades performance. We identify two key factors for determining these positions: tokenization mismatch across models and consensus in their next-token probability distributions. Based on this, we propose SAFE (Stable And Fast LLM Ensembling), a framework that selectively ensembles by jointly considering these factors. To further improve stability, we introduce a probability sharpening strategy that consolidates probabilities spread across multiple sub-word tokens representing the same word into a single representative token. Our experiments on diverse benchmarks, including MATH500 and BBH, demonstrate that SAFE outperforms existing methods in both accuracy and efficiency, with gains achieved even when ensembling fewer than 1% of tokens.
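The two ideas in the abstract, choosing ensembling positions and sharpening sub-word probabilities, can be sketched as toy Python. Everything here is an illustrative assumption rather than the authors' method: distributions are plain `{token: prob}` dicts, "same word" is approximated by string-prefix grouping, and the decision rule in `should_ensemble` is a simplified stand-in for the paper's selection criteria.

```python
def should_ensemble(dist_a, dist_b):
    """Toy position-selection rule (NOT the paper's actual criterion).

    Skip ensembling when the models already agree on the top token
    (high consensus), and only ensemble when their top tokens disagree
    but are prefix-compatible (i.e., plausibly sub-words of the same
    word, so the tokenization mismatch is resolvable)."""
    top_a = max(dist_a, key=dist_a.get)
    top_b = max(dist_b, key=dist_b.get)
    if top_a == top_b:  # consensus: keep the models' shared choice
        return False
    # prefix-compatible tokens suggest a tokenization mismatch worth resolving
    return top_a.startswith(top_b) or top_b.startswith(top_a)


def sharpen(dist):
    """Simplified probability sharpening: fold mass spread over sub-word
    prefixes of the same word onto the longest matching token."""
    out = {}
    # longest tokens first, so shorter prefixes fold into them
    for tok in sorted(dist, key=len, reverse=True):
        host = next((t for t in out if t.startswith(tok)), None)
        if host is not None:
            out[host] += dist[tok]  # merge prefix mass into the longer token
        else:
            out[tok] = dist[tok]
    return out


# Mass on the prefixes "math" and "ma" folds onto "mathemat",
# while the unrelated token "dog" is left untouched.
print(sharpen({"mathemat": 0.4, "math": 0.3, "ma": 0.2, "dog": 0.1}))
```

The prefix heuristic is deliberately crude; a faithful implementation would need each model's detokenizer to decide which sub-word tokens realize the same word.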