MapReduce LoRA: Advancing the Pareto Front in Multi-Preference Optimization for Generative Models

📅 2025-11-25
📈 Citations: 0
✨ Influential: 0
🤖 AI Summary
To address the "alignment tax" (performance degradation in non-targeted dimensions when optimizing for a single preference in multi-preference alignment), this paper proposes the MapReduce LoRA and Reward-aware Token Embedding (RaTE) framework. It trains preference-specific LoRA experts in parallel, iteratively merges them to update a shared base model, and introduces reward-aware token embeddings for fine-grained preference control at inference. The framework applies uniformly to text-to-image, text-to-video, and language generation. On Stable Diffusion 3.5 Medium, it improves GenEval, PickScore, and OCR by 36.1%, 4.6%, and 55.7%, respectively (32.7%, 4.3%, and 67.1% on FLUX.1-dev); on HunyuanVideo, it improves visual and motion quality by 48.1% and 90.0%; and on Llama-2 7B, it boosts helpfulness and harmlessness by 43.4% and 136.7%. These gains significantly advance the multi-preference Pareto frontier.
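The map/reduce cycle described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the merge operator is assumed here to be a simple average of the experts' low-rank updates, and the names `lora_delta` and `mapreduce_lora_round` are invented for this sketch.

```python
import numpy as np

def lora_delta(A, B):
    """Dense weight update contributed by one LoRA expert: delta_W = B @ A."""
    return B @ A

def mapreduce_lora_round(W_base, experts):
    """One illustrative round: the 'map' step trains one LoRA expert per
    preference (represented here as already-trained (A, B) factor pairs);
    the 'reduce' step merges their low-rank updates into the shared base
    weights. Averaging is an assumed merge rule for this sketch."""
    deltas = [lora_delta(A, B) for A, B in experts]
    return W_base + np.mean(deltas, axis=0)

rng = np.random.default_rng(0)
d, r = 8, 2                      # model width, LoRA rank
W = rng.standard_normal((d, d))  # shared base weight matrix
experts = [(rng.standard_normal((r, d)),   # A: (rank, d)
            rng.standard_normal((d, r)))   # B: (d, rank)
           for _ in range(3)]              # three preference experts
W_next = mapreduce_lora_round(W, experts)  # shared base for the next round
```

In the iterative scheme, `W_next` would serve as the starting point for the next round of parallel expert training, so each preference's expert is retrained against a base that already reflects the other preferences.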


πŸ“ Abstract
Reinforcement learning from human feedback (RLHF) with reward models has advanced the alignment of generative models to human aesthetic and perceptual preferences. However, jointly optimizing multiple rewards often incurs an alignment tax: improving one dimension degrades others. To address this, we introduce two complementary methods: MapReduce LoRA and Reward-aware Token Embedding (RaTE). MapReduce LoRA trains preference-specific LoRA experts in parallel and iteratively merges them to refine a shared base model; RaTE learns reward-specific token embeddings that compose at inference for flexible preference control. On text-to-image generation, GenEval, PickScore, and OCR improve by 36.1%, 4.6%, and 55.7% on Stable Diffusion 3.5 Medium, and by 32.7%, 4.3%, and 67.1% on FLUX.1-dev. On text-to-video generation (HunyuanVideo), visual and motion quality improve by 48.1% and 90.0%, respectively. On the language task Helpful Assistant with Llama-2 7B, helpfulness and harmlessness improve by 43.4% and 136.7%, respectively. Our framework sets a new state-of-the-art multi-preference alignment recipe across modalities.
Problem

Research questions and friction points this paper is trying to address.

Jointly optimizing multiple rewards incurs an alignment tax
Improving one preference dimension degrades the others
Flexible preference control is hard to achieve across different modalities
Innovation

Methods, ideas, or system contributions that make the work stand out.

MapReduce LoRA trains and merges preference-specific LoRA experts
Reward-aware Token Embedding learns reward-specific token embeddings
Framework enables flexible multi-preference alignment across modalities
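The RaTE idea of composing reward-specific token embeddings at inference can be sketched as below. The composition rule (scaling each reward's embedding by a user-chosen strength and concatenating) is an assumption for illustration; the paper's exact operator may differ, and `compose_rate_embeddings` is a hypothetical name.

```python
import numpy as np

def compose_rate_embeddings(reward_embeds, weights):
    """Compose reward-specific token embeddings at inference time.

    reward_embeds: dict mapping reward name -> (num_tokens, dim) array of
        learned embeddings for that reward.
    weights: dict mapping reward name -> user-chosen preference strength.

    Returns the stacked, strength-scaled tokens; in a real pipeline these
    would be prepended to the prompt's token embeddings (an assumed
    composition rule for this sketch)."""
    tokens = [w * reward_embeds[name] for name, w in weights.items()]
    return np.concatenate(tokens, axis=0)

dim = 16
embeds = {"aesthetic": np.ones((2, dim)),      # toy learned embeddings
          "ocr": np.full((2, dim), 0.5)}
ctx = compose_rate_embeddings(embeds, {"aesthetic": 1.0, "ocr": 0.5})
# ctx has shape (4, dim): two tokens per reward, scaled by its strength
```

Because the preference strengths are plain inference-time inputs, a user can dial individual rewards up or down without retraining, which is the flexibility the summary attributes to RaTE.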