Quasi-random Multi-Sample Inference for Large Language Models

📅 2024-11-09
🏛️ arXiv.org
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the poor parallelism of beam search and the insufficient diversity of sampling-based decoding for large language models (LLMs), this paper introduces Arithmetic Sampling: a decoding method that leverages the implicit arithmetic codebook defined by an LLM to generate quasi-random, high-diversity token sequences. It is the first work to integrate quasi-random coding into LLM decoding. The method enables fully parallelized inference without additional computational overhead. Evaluated on chain-of-thought self-consistency and minimum Bayes risk translation decoding, it achieves consistent improvements: +3-5 percentage points in accuracy on GSM8K and +0.45-0.89 COMET points on WMT19. Its core contribution is breaking the long-standing trade-off between parallelism and sample diversity in LLM decoding, establishing a new paradigm for efficient and robust multi-sample inference.

πŸ“ Abstract
Large language models (LLMs) are often equipped with multi-sample decoding strategies. An LLM implicitly defines an arithmetic code book, facilitating efficient and embarrassingly parallelizable arithmetic sampling to produce multiple samples using quasi-random codes. Traditional text generation methods, such as beam search and sampling-based techniques, have notable limitations: they lack parallelizability or diversity of sampled sequences. This study explores the potential of arithmetic sampling, contrasting it with ancestral sampling across two decoding tasks that employ multi-sample inference: chain-of-thought reasoning with self-consistency and machine translation with minimum Bayes risk decoding. Our results demonstrate that arithmetic sampling produces more diverse samples, significantly improving reasoning and translation performance as the sample size increases. We observe a 3-5% point increase in accuracy on the GSM8K dataset and a 0.45-0.89% point increment in COMET score for WMT19 tasks using arithmetic sampling without any significant computational overhead.
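The mechanism the abstract describes can be made concrete with a toy sketch (the function names and the three-token vocabulary are illustrative, not from the paper): each quasi-random code in [0, 1) is decoded through the model's implicit arithmetic codebook by repeatedly narrowing an interval in proportion to next-token probabilities, and the N codes form an evenly spaced lattice with one shared random offset, so every sample can be decoded independently and in parallel.

```python
import random

def toy_next_token_probs(prefix):
    # Hypothetical stand-in for an LLM's next-token distribution
    # (not from the paper): deterministic given the prefix.
    vocab = ["a", "b", "<eos>"]
    seed = 7 * len(prefix) + 3 * prefix.count("a")
    rng = random.Random(seed)
    raw = [rng.random() + 0.1 for _ in vocab]
    z = sum(raw)
    return [(tok, w / z) for tok, w in zip(vocab, raw)]

def arithmetic_sample(code, max_len=10):
    """Decode the sequence whose arithmetic-code interval contains
    `code`: at each step the current interval is split into
    sub-intervals proportional to next-token probabilities, and the
    token whose sub-interval contains the code is selected."""
    lo, hi = 0.0, 1.0
    seq = []
    for _ in range(max_len):
        dist = toy_next_token_probs(seq)
        cum = lo
        for i, (tok, p) in enumerate(dist):
            width = (hi - lo) * p
            # the last token also absorbs floating-point round-off
            if code < cum + width or i == len(dist) - 1:
                lo, hi = cum, cum + width
                seq.append(tok)
                break
            cum += width
        if seq[-1] == "<eos>":
            break
    return seq

def lattice_codes(n, seed=0):
    # n quasi-random codes: an evenly spaced lattice in [0, 1)
    # shifted by one shared random offset.
    offset = random.Random(seed).random()
    return [(offset + i / n) % 1.0 for i in range(n)]

# Each code decodes independently, so this loop is trivially parallel.
samples = [arithmetic_sample(c) for c in lattice_codes(4)]
```

Because the lattice spreads codes evenly across [0, 1), the decoded samples cover the model's sequence distribution more systematically than independent ancestral draws, which is the source of the diversity gains the abstract reports.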
Problem

Research questions and friction points this paper is trying to address.

Enhancing diversity in LLM multi-sample decoding strategies
Addressing limitations of traditional text generation methods
Improving reasoning and translation performance via arithmetic sampling
Innovation

Methods, ideas, or system contributions that make the work stand out.

Quasi-random codes enable parallel arithmetic sampling
Arithmetic sampling enhances diversity in generated sequences
No significant computational overhead with arithmetic sampling
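The self-consistency evaluation mentioned above reduces to a majority vote over the final answers parsed from the N sampled reasoning chains; a minimal sketch (the function name is illustrative, not from the paper):

```python
from collections import Counter

def self_consistency(answers):
    """Majority vote over final answers extracted from N sampled
    chains of thought; ties break toward the earliest-seen answer."""
    return Counter(answers).most_common(1)[0][0]

# e.g. answers parsed from five sampled reasoning chains
best = self_consistency(["42", "42", "41", "42", "40"])
```

More diverse samples make this vote more informative, which is why arithmetic sampling's diversity translates into the GSM8K accuracy gains reported above.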