Preference Optimization for Reasoning with Pseudo Feedback

📅 2024-11-25
🏛️ arXiv.org
📈 Citations: 1
Influential: 0
🤖 AI Summary
To address the scarcity of high-quality human-verified labels for mathematical reasoning and code generation, this paper proposes a pseudo-feedback framework that labels candidate solutions by evaluating them against associated test cases, requiring no manual annotation. Two forms of test-case-based pseudo feedback are explored: test cases generated by frontier LLMs, and self-consistency extended to multiple test cases. The resulting preference pairs are used for DPO-based preference optimization following supervised fine-tuning. Using Mathstral-7B as the base model, the method reaches 68.6 accuracy on MATH (+10.3 absolute), surpassing NuminaMath-72B and GPT-4-Turbo-1106-preview; it also attains 90.3 on GSM8K and 42.3 on College Math. Building on Deepseek-coder-7B-v1.5, it scores 24.6 on LiveCodeBench, surpassing Claude-3-Haiku.

📝 Abstract
Preference optimization techniques, such as Direct Preference Optimization (DPO), are frequently employed to enhance the reasoning capabilities of large language models (LLMs) in domains like mathematical reasoning and coding, typically following supervised fine-tuning. These methods rely on high-quality labels for reasoning tasks to generate preference pairs; however, the availability of reasoning datasets with human-verified labels is limited. In this study, we introduce a novel approach to generate pseudo feedback for reasoning tasks by framing the labeling of solutions to reasoning problems as an evaluation against associated test cases. We explore two forms of pseudo feedback based on test cases: one generated by frontier LLMs and the other by extending self-consistency to multiple test cases. We conduct experiments on both mathematical reasoning and coding tasks using pseudo feedback for preference optimization, and observe improvements across both tasks. Specifically, using Mathstral-7B as our base model, we improve MATH results from 58.3 to 68.6, surpassing both NuminaMath-72B and GPT-4-Turbo-1106-preview. In GSM8K and College Math, our scores increase from 85.6 to 90.3 and from 34.3 to 42.3, respectively. Building on Deepseek-coder-7B-v1.5, we achieve a score of 24.6 on LiveCodeBench (from 21.1), surpassing Claude-3-Haiku.
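The core idea of the abstract — labeling solutions by evaluating them against test cases and turning pass rates into preference pairs — can be sketched as follows. This is a minimal illustration, not the paper's implementation; function names (`pass_rate`, `build_preference_pair`) and the ranking heuristic are assumptions for exposition.

```python
# Hedged sketch: label candidate solutions by executing them against
# test cases, then form a DPO-style (chosen, rejected) preference pair
# from their pass rates. All names here are illustrative.

def pass_rate(solution_fn, test_cases):
    """Fraction of (args, expected_output) test cases the solution passes."""
    passed = 0
    for args, expected in test_cases:
        try:
            if solution_fn(*args) == expected:
                passed += 1
        except Exception:
            pass  # a crashing solution simply fails that test case
    return passed / len(test_cases)

def build_preference_pair(solutions, test_cases):
    """Rank candidate solutions by pass rate; best is chosen, worst rejected."""
    ranked = sorted(solutions, key=lambda s: pass_rate(s, test_cases),
                    reverse=True)
    return ranked[0], ranked[-1]

# Toy example: two candidate implementations of absolute value.
good = lambda x: x if x >= 0 else -x
bad = lambda x: x  # wrong for negative inputs
tests = [((3,), 3), ((-2,), 2), ((0,), 0)]
chosen, rejected = build_preference_pair([bad, good], tests)
```

In practice the "solutions" are model-generated programs or reasoning traces, and the resulting pairs feed a standard DPO training loop.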
Problem

Research questions and friction points this paper is trying to address.

Optimizing reasoning with pseudo feedback
Enhancing LLMs in math and coding
Generating labels via test case evaluation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Pseudo feedback generation
Multi-test-case consistency
Preference optimization enhancement
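The "multi-test-case consistency" idea above — when no verified labels exist, treat the majority output of many sampled solutions across several test inputs as the pseudo-expected output — can be sketched as follows. This is a hedged illustration under assumed names (`pseudo_expected_outputs`), not the paper's exact procedure.

```python
# Hedged sketch: extend self-consistency to multiple test cases by taking,
# for each test input, the majority output across sampled solutions as the
# pseudo-expected output. Names are illustrative.
from collections import Counter

def pseudo_expected_outputs(sampled_solutions, test_inputs):
    """Majority-vote output per test input across sampled solutions."""
    expected = []
    for args in test_inputs:
        outputs = []
        for fn in sampled_solutions:
            try:
                outputs.append(fn(*args))
            except Exception:
                outputs.append(None)  # failures still count as one vote
        expected.append(Counter(outputs).most_common(1)[0][0])
    return expected

# Toy example: three sampled "solutions"; two agree on doubling the input.
sols = [lambda x: x * 2, lambda x: x * 2, lambda x: x + 2]
inputs = [(1,), (3,)]
labels = pseudo_expected_outputs(sols, inputs)
```

Solutions can then be scored against these pseudo-expected outputs exactly as if they were verified test cases, yielding preference pairs without human labels.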