🤖 AI Summary
This work addresses the pervasive issue of cross-objective interference in multi-objective alignment of large language models, where improving performance on one objective often degrades others. The study formally characterizes this interference and introduces an analytical framework based on the covariance between individual reward signals and the scalarized score. It derives a local covariance law and establishes global convergence conditions for non-convex optimization under Polyak–Łojasiewicz assumptions, covering the clipped surrogate objectives used in modern alignment. Building on these insights, the authors propose CTWA (Covariance Targeted Weight Adaptation), a plug-and-play method that preserves positive covariance to mitigate interference. Extensive experiments across multiple scalarization algorithms demonstrate that interference is ubiquitous and show that CTWA consistently improves overall multi-objective alignment performance.
📝 Abstract
We study a persistent failure mode in multi-objective alignment for large language models (LLMs): training improves performance on only a subset of objectives while causing others to degrade. We formalize this phenomenon as cross-objective interference and conduct the first systematic study across classic scalarization algorithms, showing that interference is pervasive and strongly model-dependent. To explain this phenomenon, we derive a local covariance law showing that an objective improves at first order when its reward exhibits positive covariance with the scalarized score. We extend this analysis to the clipped surrogate objectives used in modern alignment, demonstrating that the covariance law remains valid under mild conditions despite clipping. Building on this analysis, we propose Covariance Targeted Weight Adaptation (CTWA), a plug-and-play method that maintains positive covariance between objective rewards and the training signal to effectively mitigate cross-objective interference. Finally, we complement these local improvement conditions with a global convergence analysis under the Polyak–Łojasiewicz condition, establishing when non-convex scalarized optimization achieves global convergence and how cross-objective interference depends on the model's geometric properties.
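The local covariance law can be illustrated numerically. The sketch below is a minimal toy model, not the paper's code: a policy over a few discrete responses, two hypothetical objective rewards, and a small exponential-tilting step toward the scalarized score (the form a KL-regularized policy-improvement step takes). At first order, the resulting change in each objective's expected reward equals the covariance of that reward with the scalarized score under the current policy, so an objective with negative covariance degrades even as the scalarized objective improves.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): a policy over 6 discrete
# responses, each scored by two objectives.
p = rng.dirichlet(np.ones(6))     # current policy over responses
r = rng.normal(size=(6, 2))       # per-response rewards for objectives 0 and 1
w = np.array([0.7, 0.3])          # scalarization weights
s = r @ w                         # scalarized score per response

def expect(f, q):
    return float(np.sum(q * f))

def cov(f, g, q):
    return expect(f * g, q) - expect(f, q) * expect(g, q)

# Tilt the policy toward high scalarized score: q(a) ∝ p(a) exp(eta * s(a)).
eta = 1e-4
q = p * np.exp(eta * s)
q /= q.sum()

for i in range(2):
    # Finite-difference rate of change of objective i under the tilt,
    # compared against Cov_p(r_i, s), which the covariance law predicts.
    delta = (expect(r[:, i], q) - expect(r[:, i], p)) / eta
    c = cov(r[:, i], s, p)
    print(f"objective {i}: dE[r_i]/deta ~ {delta:.4f}, Cov(r_i, s) = {c:.4f}")
```

For small `eta` the two columns agree up to O(`eta`) error, and an objective whose reward is negatively correlated with the scalarized score loses expected reward: a toy instance of cross-objective interference.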