🤖 AI Summary
This work addresses the optimization imbalance in multi-objective alignment caused by static linear scalarization, where fixed weights often lead models to overfit high-variance objectives—such as OCR—at the expense of perceptual goals. To mitigate this, the authors propose the APEX framework, which identifies two key imbalance mechanisms: “variance hijacking” and “gradient conflict.” APEX introduces a two-stage adaptive normalization scheme to stabilize heterogeneous reward signals and incorporates a P³ adaptive prioritization scheduler that dynamically balances learning potential, conflict penalties, and progress demands. Evaluated by fine-tuning Stable Diffusion 3.5 across four heterogeneous objectives, APEX achieves superior Pareto trade-offs, with gains of +1.31 in PickScore, +0.35 in DeQA, and +0.53 in aesthetic score while maintaining stable OCR accuracy.
📝 Abstract
Multi-objective alignment for text-to-image generation is commonly implemented via static linear scalarization, but fixed weights often fail under heterogeneous rewards, leading to optimization imbalance where models overfit high-variance, high-responsiveness objectives (e.g., OCR) while under-optimizing perceptual goals. We identify two mechanistic causes: variance hijacking, where reward dispersion induces implicit reweighting that dominates the normalized training signal, and gradient conflicts, where competing objectives produce opposing update directions and trigger seesaw-like oscillations. We propose APEX (Adaptive Priority-based Efficient X-objective Alignment), which stabilizes heterogeneous rewards with Dual-Stage Adaptive Normalization and dynamically schedules objectives via P³ Adaptive Priorities that combine learning potential, conflict penalty, and progress need. On Stable Diffusion 3.5, APEX achieves improved Pareto trade-offs across four heterogeneous objectives, with balanced gains of +1.31 PickScore, +0.35 DeQA, and +0.53 Aesthetics while maintaining competitive OCR accuracy, thereby mitigating the instability of multi-objective alignment.
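To make the two mechanisms concrete, here is a minimal illustrative sketch of the abstract's ideas: a two-stage normalization that prevents high-variance rewards from dominating the combined signal, and a priority scheduler that trades off learning potential, conflict penalty, and progress need. The paper's exact formulas are not given here, so the specific choices below (z-score then `tanh` squashing for the two stages, an additive score with a softmax for the priorities, and all function and parameter names) are assumptions for illustration only.

```python
import numpy as np

def dual_stage_normalize(rewards, eps=1e-8):
    """Illustrative two-stage reward normalization (not the paper's exact scheme).
    Stage 1: z-score standardization removes per-objective scale differences.
    Stage 2: tanh squashing bounds the signal so a high-dispersion objective
    (e.g., OCR) cannot 'hijack' the combined gradient via outlier rewards."""
    r = np.asarray(rewards, dtype=float)
    z = (r - r.mean()) / (r.std() + eps)   # stage 1: standardize within objective
    return np.tanh(z)                      # stage 2: bound to (-1, 1)

def p3_weights(potential, conflict, progress_need, temp=1.0):
    """Hypothetical P^3-style priority: combine per-objective learning potential,
    conflict penalty, and progress need into a softmax weight per objective.
    Higher potential and need raise priority; conflict lowers it."""
    score = (np.asarray(potential, dtype=float)
             - np.asarray(conflict, dtype=float)
             + np.asarray(progress_need, dtype=float))
    e = np.exp(score / temp)
    return e / e.sum()                     # weights sum to 1 across objectives
```

Under this sketch, the scalarized training signal at each step would be the priority-weighted sum of the normalized rewards, with the weights recomputed as objectives' potential, conflict, and progress estimates evolve.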