SparsePO: Controlling Preference Alignment of LLMs via Sparse Token Masks

📅 2024-10-07
🏛️ arXiv.org
📈 Citations: 5 (influential: 1)
🤖 AI Summary
Existing preference optimization (PO) methods that follow the offline DPO objective weight every token equally, even though human preferences are often driven by specific words or phrases (e.g., toxic expressions). SparsePO is a flexible PO objective that learns sparse token-level masks to weight each token's KL-divergence and reward contributions to the loss. The masks come in two variants, either derived from the reference model itself or learned on the fly during training, and the induced sparsity lets the model settle on how selectively to focus on tokens. Across sentiment control, dialogue, summarization, and text-to-code generation, SparsePO assigns task-meaningful weights to tokens, generates more responses with the desired preference, and improves reasoning tasks by up to 2 percentage points over other token- and response-level PO methods.

📝 Abstract
Preference Optimization (PO) has proven an effective step for aligning language models to human-desired behaviors. Current variants, following the offline Direct Preference Optimization objective, have focused on a strict setting where all tokens contribute KL divergence and reward signals to the loss function. However, human preference is not affected equally by each word in a sequence but often depends on specific words or phrases, e.g., the existence of toxic terms leads to non-preferred responses. Based on this observation, we argue that not all tokens should be weighted equally during PO and propose a flexible objective, termed SparsePO, that aims to automatically learn to weight the KL divergence and reward corresponding to each token during PO training. We propose two variants of weight masks that can either be derived from the reference model itself or learned on the fly. Notably, our method induces sparsity in the learned masks, allowing the model to learn how best to weight reward and KL divergence contributions at the token level while settling on an optimal level of mask sparsity. Extensive experiments on multiple domains, including sentiment control, dialogue, text summarization, and text-to-code generation, illustrate that our approach assigns meaningful weights to tokens according to the target task, generates more responses with the desired preference, and improves reasoning tasks by up to 2 percentage points compared to other token- and response-level PO methods.
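To make the idea concrete, here is a rough sketch of what a token-masked DPO-style objective could look like. This is an illustration under assumptions, not the paper's exact formulation: the paper weights the reward and KL terms separately, while this sketch collapses them into a single learned mask $m_t \in [0,1]$ per token of the chosen response $y^w$ and rejected response $y^l$:

```latex
% Illustrative token-masked DPO-style objective (single mask per token;
% the paper weights reward and KL contributions separately).
\mathcal{L} = -\log \sigma\!\left(
    \beta \sum_{t} m^{w}_{t}
      \log \frac{\pi_\theta\!\left(y^{w}_{t} \mid x, y^{w}_{<t}\right)}
                {\pi_{\mathrm{ref}}\!\left(y^{w}_{t} \mid x, y^{w}_{<t}\right)}
    \; - \;
    \beta \sum_{t} m^{l}_{t}
      \log \frac{\pi_\theta\!\left(y^{l}_{t} \mid x, y^{l}_{<t}\right)}
                {\pi_{\mathrm{ref}}\!\left(y^{l}_{t} \mid x, y^{l}_{<t}\right)}
  \right)
```

Setting all $m_t = 1$ recovers the standard DPO objective; driving most mask weights toward zero concentrates the preference signal on the few tokens that actually carry it.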
Problem

Research questions and friction points this paper is trying to address.

Optimizing token-level alignment in preference optimization algorithms
Automatically learning sparse masks for KL divergence and reward balancing
Improving model alignment without compromising reasoning or response quality
Innovation

Methods, ideas, or system contributions that make the work stand out.

Sparse token masks control preference alignment in LLMs
Automatically weights KL divergence and reward per token (see the code sketch after this list)
Learns optimal sparsity for token-level preference balancing
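
Below is a minimal sketch of how such per-token masking might be wired into a DPO-style loss. This is hypothetical code, not the authors' implementation: the tensor names, the sigmoid-gated mask head, the use of a single shared mask for reward and KL, and the L1 sparsity penalty are all assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def masked_dpo_loss(
    policy_logps: torch.Tensor,   # (B, T) per-token log-probs under the policy
    ref_logps: torch.Tensor,      # (B, T) per-token log-probs under the reference model
    mask_logits: torch.Tensor,    # (B, T) unnormalized mask scores (hypothetical mask head)
    pad_mask: torch.Tensor,       # (B, T) 1 for real tokens, 0 for padding
    chosen_idx: torch.Tensor,     # indices of chosen responses in the batch
    rejected_idx: torch.Tensor,   # indices of rejected responses in the batch
    beta: float = 0.1,
    sparsity_coef: float = 0.01,
):
    """Hypothetical token-masked DPO-style loss (illustrative sketch).

    Each token's log-ratio is scaled by a learned mask in [0, 1]; an L1
    penalty on the masks is one plausible way to encourage sparsity.
    """
    # Per-token log-ratio between policy and reference model.
    log_ratio = policy_logps - ref_logps                      # (B, T)

    # Learned per-token mask in [0, 1]; padding contributes nothing.
    token_mask = torch.sigmoid(mask_logits) * pad_mask        # (B, T)

    # Mask-weighted sequence-level implicit reward for each response.
    seq_reward = beta * (token_mask * log_ratio).sum(dim=-1)  # (B,)

    # Standard DPO-style Bradley-Terry loss on chosen vs. rejected rewards.
    margin = seq_reward[chosen_idx] - seq_reward[rejected_idx]
    pref_loss = -F.logsigmoid(margin).mean()

    # L1 sparsity penalty so only salient tokens keep nonzero weight.
    sparsity_loss = token_mask.sum() / pad_mask.sum().clamp(min=1)

    return pref_loss + sparsity_coef * sparsity_loss
```

In practice the mask head would be a small module over the policy's hidden states; the paper's reference-model-derived variant would instead compute mask weights from statistics of the reference model rather than a trained head, and the paper reports that sparsity emerges in the learned masks rather than being imposed by a fixed penalty.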