Self-Consistency Preference Optimization

📅 2024-11-06
🏛️ arXiv.org
📈 Citations: 10
Influential: 2
🤖 AI Summary
Existing self-alignment methods suffer from unreliable reward signals on complex reasoning tasks because no human annotations are available. To address this, the authors propose ScPO (Self-Consistency Preference Optimization), which elevates self-consistency from an inference-time decoding heuristic to an explicit training objective, enabling automated preference learning without human labels. ScPO samples diverse answer candidates for unlabeled problems, uses a consistency-based metric to construct preference pairs (consistent answers preferred over inconsistent ones), and iteratively optimizes the model on those pairs. Evaluated on GSM8K, MATH, and ZebraLogic, ScPO substantially outperforms conventional reward-model training. Notably, Llama-3 8B fine-tuned with ScPO surpasses significantly larger models on ZebraLogic, including Llama-3 70B, Gemma-2 27B, and Claude-3 Haiku.

📝 Abstract
Self-alignment, whereby models learn to improve themselves without human annotation, is a rapidly growing research area. However, existing techniques often fail to improve complex reasoning tasks due to the difficulty of assigning correct rewards. An orthogonal approach that is known to improve correctness is self-consistency, a method applied at inference time based on multiple sampling in order to find the most consistent answer. In this work, we extend the self-consistency concept to help train models. We thus introduce self-consistency preference optimization (ScPO), which iteratively trains consistent answers to be preferred over inconsistent ones on unsupervised new problems. We show ScPO leads to large improvements over conventional reward model training on reasoning tasks such as GSM8K and MATH, closing the gap with supervised training with gold answers or preferences, and that combining ScPO with standard supervised learning improves results even further. On ZebraLogic, ScPO finetunes Llama-3 8B to be superior to Llama-3 70B, Gemma-2 27B, and Claude-3 Haiku.
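The pair-construction step described in the abstract — sampling multiple answers and preferring the most consistent one — can be sketched as follows. This is a minimal illustration, assuming final answers have already been extracted from each sampled chain of thought; `build_preference_pair` is a hypothetical helper, not the paper's released code, and the vote-margin weight is one plausible consistency metric.

```python
from collections import Counter

def build_preference_pair(sampled_answers):
    """Construct an unsupervised preference pair from multi-path samples.

    The most consistent (most frequent) final answer is treated as the
    'chosen' response and the least consistent as 'rejected'; the vote
    margin serves as a confidence weight for the pair.
    """
    counts = Counter(sampled_answers)
    if len(counts) < 2:
        return None  # all samples agree: no contrastive signal to train on
    ranked = counts.most_common()
    chosen, n_chosen = ranked[0]
    rejected, n_rejected = ranked[-1]
    weight = (n_chosen - n_rejected) / len(sampled_answers)
    return chosen, rejected, weight

# e.g. 8 sampled answers to one unlabeled GSM8K-style problem
pair = build_preference_pair(["42", "42", "42", "41", "42", "39", "42", "42"])
# → ("42", "39", 0.625)
```

Problems where every sample agrees yield no pair, which is why the method targets questions hard enough to produce inconsistent answers.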
Problem

Research questions and friction points this paper is trying to address.

Improving complex reasoning without human annotation
Extending self-consistency from inference-time decoding to training
Closing the performance gap with supervised training on gold answers or preferences
Innovation

Methods, ideas, or system contributions that make the work stand out.

Extends the self-consistency principle from inference-time decoding to model training
Iteratively trains consistent answers to be preferred over inconsistent ones on unlabeled problems
Combines ScPO with standard supervised learning for further gains
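The iterative preference training in the list above optimizes a pairwise objective over the pairs mined from consistency votes. Below is a minimal sketch of a confidence-weighted, DPO-style loss on one pair, assuming per-sequence log-probabilities under the current policy and a frozen reference model are available; the exact ScPO objective and its weighting scheme follow the paper, and `beta` here is an assumed hyperparameter.

```python
import math

def weighted_dpo_loss(logp_chosen, logp_rejected,
                      ref_logp_chosen, ref_logp_rejected,
                      weight, beta=0.1):
    """Confidence-weighted DPO-style loss for one preference pair.

    margin: how much more the policy (relative to the reference model)
    prefers the consistent answer over the inconsistent one.
    The consistency-vote weight scales the pair's contribution, so
    high-agreement pairs drive the update harder than noisy ones.
    """
    margin = beta * ((logp_chosen - ref_logp_chosen)
                     - (logp_rejected - ref_logp_rejected))
    sigmoid = 1.0 / (1.0 + math.exp(-margin))
    return -weight * math.log(sigmoid)
```

With a zero margin the loss is `weight * log 2`; it shrinks as the policy learns to prefer the consistent answer, and low-confidence pairs (small vote margins) contribute proportionally less gradient.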