🤖 AI Summary
Multilingual preference alignment suffers from negative interference caused by conflicting optimization objectives during joint training, a problem that prior work has left largely unexamined. To address it, the paper proposes CONGRAD, a scalable filtering framework that selects high-quality cross-lingual preference samples with minimal gradient conflicts. CONGRAD applies gradient surgery to retain samples whose gradients align with an aggregated multilingual update direction, and integrates sublinear gradient compression to balance alignment quality against memory overhead. Evaluated on LLaMA3-8B and Gemma2-2B across ten languages within a self-rewarding framework, CONGRAD consistently outperforms strong baselines and generalizes to unseen languages with minimal alignment tax. The core contributions are threefold: (1) an analysis of gradient conflict in multilingual preference alignment; (2) a scalable and memory-efficient solution grounded in gradient surgery and compression; and (3) empirical evidence of strong cross-lingual transfer with little performance degradation.
📝 Abstract
Naive joint training of large language models (LLMs) for multilingual preference alignment can suffer from negative interference, a known issue in multilingual training where conflicting objectives degrade overall performance. However, the impact of this phenomenon on multilingual preference alignment remains largely underexplored. To address this issue, we propose CONGRAD, a scalable and effective filtering method that selects high-quality preference samples with minimal gradient conflicts across languages. Our method leverages gradient surgery to retain samples aligned with an aggregated multilingual update direction. Additionally, we incorporate a sublinear gradient compression strategy that reduces memory overhead during gradient accumulation. We integrate CONGRAD into the self-rewarding framework and evaluate it on LLaMA3-8B and Gemma2-2B across 10 languages. Results show that CONGRAD consistently outperforms strong baselines on both seen and unseen languages, with minimal alignment tax.
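To make the filtering idea concrete, here is a minimal sketch of one plausible reading of the abstract: per-sample gradients are compressed with a random projection (a stand-in for the paper's sublinear compression), an aggregated multilingual update direction is formed, and only samples whose gradients do not conflict with that direction (highest cosine similarity) are kept. The function name `congrad_filter`, the mean-gradient aggregation, and the `keep_ratio` / `proj_dim` parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def congrad_filter(sample_grads, keep_ratio=0.5, proj_dim=64, seed=0):
    """Illustrative sketch of gradient-conflict-based sample filtering.

    sample_grads: (n_samples, d) array of per-sample preference gradients,
    pooled across languages. Returns indices of retained samples and the
    per-sample alignment scores. All design choices here are assumptions.
    """
    rng = np.random.default_rng(seed)
    n, d = sample_grads.shape

    # Sublinear memory: project gradients to proj_dim << d before storing.
    # (A Johnson-Lindenstrauss random projection approximately preserves
    # inner products, so conflict scores survive compression.)
    P = rng.standard_normal((d, proj_dim)) / np.sqrt(proj_dim)
    g = sample_grads @ P

    # Aggregated multilingual update direction (here: mean compressed gradient).
    g_bar = g.mean(axis=0)

    # Conflict score: cosine similarity with the aggregated direction.
    # Negative values indicate a sample whose update opposes the shared direction.
    cos = (g @ g_bar) / (np.linalg.norm(g, axis=1) * np.linalg.norm(g_bar) + 1e-8)

    # Retain the top keep_ratio fraction of least-conflicting samples.
    k = max(1, int(keep_ratio * n))
    keep = np.argsort(-cos)[:k]
    return keep, cos
```

In this toy form, samples from a language whose gradient points against the pooled update direction score negatively and are dropped, which is the intuition behind filtering out negative interference before preference optimization.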