FIRM: Federated In-client Regularized Multi-objective Alignment for Large Language Models

📅 2025-11-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
In federated learning (FL), aligning large language models (LLMs) with multiple objectives (e.g., helpfulness and harmlessness) faces challenges including high communication overhead, severe client divergence, and an absence of theoretical convergence guarantees. To address these, this paper proposes the first FL alignment framework supporting in-client regularized multi-objective optimization. The method eliminates the need to upload multiple gradients, requiring only a single parameter set per communication round, and provides the first finite-time convergence guarantee for federated multi-objective alignment. By modeling the Pareto frontier and incorporating a divergence-suppression mechanism, it significantly improves training stability and cross-client consistency. Experiments demonstrate superior reward trade-offs over baselines and show that the framework smoothly adjusts objective trade-offs in response to specified preferences.

📝 Abstract
Aligning Large Language Models (LLMs) with human values often involves balancing multiple, conflicting objectives such as helpfulness and harmlessness. Training these models is computationally intensive, and centralizing the process raises significant data privacy concerns. Federated Learning (FL) offers a compelling alternative, but existing Federated Multi-Objective Optimization (FMOO) methods face severe communication bottlenecks as their reliance on transmitting multiple gradients to a server is unscalable for large models. We introduce FIRM (Federated In-client Regularized Multi-objective alignment), a novel algorithm that achieves both client disagreement drift mitigation and communication efficiency. In FIRM, each client locally solves a regularized multi-objective optimization problem. By directly mitigating client disagreement drift through in-client regularization, our method eliminates the need for the multi-gradient transmissions common in prior works. Consequently, clients need only to transmit a single set of adapted parameters, maintaining high communication efficiency. We prove that our algorithm converges to Pareto-stationary points and, to our knowledge, provide the first finite-time convergence guarantees for this federated multi-objective alignment setting. Empirically, we show that FIRM leads to smoother training dynamics, reduced client disagreement drift, and improved reward trade-offs compared to baselines. We further propose a method to incorporate a preference over the objectives and report empirical Pareto plots, demonstrating that FIRM can smoothly adapt trade-offs between objectives in response to specified preferences.
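The abstract's core mechanism, each client locally solving a preference-weighted multi-objective problem with an in-client regularizer that suppresses drift from the global model, can be sketched as follows. This is a minimal illustrative sketch on a toy quadratic problem, not the paper's actual algorithm: the scalarization by preference weights, the proximal penalty coefficient `mu`, and the plain gradient-descent solver are all assumptions for illustration.

```python
import numpy as np

def client_update(w_global, grads_fn, prefs, mu=0.1, lr=0.05, steps=50):
    """One client's local solve (illustrative sketch): descend a
    preference-weighted combination of the objectives' gradients plus a
    proximal term pulling the iterate toward the global model, so that
    divergence is suppressed inside the client. FIRM's exact local
    objective and solver may differ."""
    w = w_global.copy()
    for _ in range(steps):
        # scalarize the multi-objective gradient with the preference weights
        g = sum(p * gi for p, gi in zip(prefs, grads_fn(w)))
        # in-client regularization: penalize divergence from the global model
        g = g + mu * (w - w_global)
        w = w - lr * g
    return w  # only this single parameter set is communicated

# Toy two-objective problem with conflicting minima at +1 and -1.
grads = lambda w: (2.0 * (w - 1.0), 2.0 * (w + 1.0))

w = client_update(np.zeros(3), grads, prefs=(0.8, 0.2))
# w lands between the two conflicting minima, tilted toward the
# preferred first objective but held near w_global by the proximal term
```

Note how the single returned parameter set replaces the per-objective gradient uploads that the abstract identifies as the communication bottleneck in prior FMOO methods.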
Problem

Research questions and friction points this paper is trying to address.

Balancing conflicting objectives like helpfulness and harmlessness in LLM alignment
Addressing communication bottlenecks in federated multi-objective optimization methods
Mitigating data privacy concerns and computational costs in centralized LLM training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated learning with in-client regularization
Single parameter transmission for efficiency
Local multi-objective optimization for alignment
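The communication pattern implied by these bullets, one parameter set per client per round instead of one gradient per objective, admits a very small server-side sketch. Simple FedAvg-style averaging is assumed here for illustration; the paper's actual aggregation rule may differ.

```python
import numpy as np

def aggregate(client_params):
    """Server-side step: average the single adapted parameter sets
    uploaded by the clients (FedAvg-style mean, assumed here for
    illustration; FIRM's aggregation rule may differ)."""
    return np.mean(np.stack(client_params), axis=0)

# Each client uploads one vector, regardless of how many objectives it balanced.
updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
w_new = aggregate(updates)  # → array([3., 4.])
```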
Fatemeh Nourzad
Department of Electrical and Computer Engineering, The Ohio State University
Amirhossein Roknilamouki
Department of Electrical and Computer Engineering, The Ohio State University
Eylem Ekici
Professor of Electrical and Computer Engineering, The Ohio State University
Jia Liu
Department of Electrical and Computer Engineering, The Ohio State University
Ness B. Shroff
Department of Electrical and Computer Engineering, The Ohio State University