Flaming-hot Initiation with Regular Execution Sampling for Large Language Models

๐Ÿ“… 2024-10-28
๐Ÿ›๏ธ arXiv.org
๐Ÿ“ˆ Citations: 1
โœจ Influential: 0
๐Ÿ“„ PDF
๐Ÿค– AI Summary
This work addresses the low sampling efficiency and trade-off between diversity and correctness in large language models (LLMs) for reasoning tasks requiring sandbox-based verificationโ€”such as mathematical problem solving and code generation. We propose FIRE, a novel sampling method that integrates regularized sandbox execution verification directly into the autoregressive generation process. FIRE performs segment-wise intervention on response sequences and employs dynamic reweighting of token probabilities based on real-time execution feedback to prioritize high-confidence candidates. A key empirical finding is that the placement of execution interventions exhibits a non-monotonic impact on performance. Experiments demonstrate that FIRE improves answer correctness by 12.3% on mathematical reasoning benchmarks, accelerates alignment training convergence, and generalizes effectively across multiple open-source LLMs.

Technology Category

Application Category

๐Ÿ“ Abstract
Since the release of ChatGPT, large language models (LLMs) have demonstrated remarkable capabilities across various domains. A key challenge in developing these general capabilities is efficiently sourcing diverse, high-quality data. This becomes especially critical in reasoning-related tasks with sandbox checkers, such as math or code, where the goal is to generate correct solutions to specific problems with higher probability. In this work, we introduce Flaming-hot Initiation with Regular Execution (FIRE) sampling, a simple yet highly effective method to efficiently find good responses. Our empirical findings show that FIRE sampling enhances inference-time generation quality and also benefits training in the alignment stage. Furthermore, we explore how FIRE sampling improves performance by promoting diversity and analyze the impact of employing FIRE at different positions within a response.
Problem

Research questions and friction points this paper is trying to address.

Efficient sourcing of diverse, high-quality data
Enhancing reasoning-related tasks in LLMs
Improving inference-time generation quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

FIRE sampling enhances generation
Promotes diversity in responses
Improves training alignment stage
๐Ÿ”Ž Similar Papers
No similar papers found.