🤖 AI Summary
This paper addresses privacy-sensitive multi-agent collaborative decision-making with preference (dueling) feedback. Existing federated linear bandit methods rely on closed-form updates of the bandit parameters, but in linear dueling bandits the linear function parameter has no closed-form expression and must instead be estimated by minimizing a loss function, rendering those methods inapplicable. Method: The authors propose the first federated linear dueling bandit algorithms, combining online gradient descent (to estimate the linear function parameter) with federated learning, enabling context-aware, distributed preference learning without sharing raw user data. Contribution/Results: They establish a sub-linear upper bound on cumulative regret and empirically demonstrate, on recommendation-system and large language model preference tasks, the effectiveness of the algorithms and the practical benefit of collaboration while keeping each agent's data local.
📝 Abstract
Contextual linear dueling bandits have recently garnered significant attention due to their widespread applications in important domains such as recommender systems and large language models. Classical dueling bandit algorithms are typically only applicable to a single agent. However, many applications of dueling bandits involve multiple agents who wish to collaborate for improved performance yet are unwilling to share their data. This motivates us to draw inspiration from federated learning, which involves multiple agents aiming to collaboratively train their neural networks via gradient descent (GD) without sharing their raw data. Previous works have developed federated linear bandit algorithms that rely on closed-form updates of the bandit parameters (e.g., the linear function parameter) to achieve collaboration. However, in linear dueling bandits, the linear function parameter lacks a closed-form expression and its estimation requires minimizing a loss function. This renders these previous methods inapplicable. In this work, we overcome this challenge through an innovative and principled combination of online gradient descent (for minimizing the loss function to estimate the linear function parameter) and federated learning, hence introducing the first federated linear dueling bandit algorithms. Through rigorous theoretical analysis, we prove that our algorithms enjoy a sub-linear upper bound on their cumulative regret. We also use empirical experiments to demonstrate the effectiveness of our algorithms and the practical benefit of collaboration.
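The core idea described above, estimating a linear preference parameter by online gradient descent on a logistic loss while agents periodically average their parameters instead of sharing data, can be illustrated with a minimal sketch. This is not the paper's algorithm (which additionally handles arm selection and confidence bounds); all names, dimensions, and the synchronization schedule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_agents, T, sync_every, lr = 5, 3, 200, 10, 0.5

# Hypothetical ground-truth preference parameter (unknown to the agents).
theta_star = rng.normal(size=d)
theta_star /= np.linalg.norm(theta_star)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Each agent keeps its own local estimate of the linear function parameter.
thetas = [np.zeros(d) for _ in range(n_agents)]

for t in range(1, T + 1):
    for i in range(n_agents):
        # Agent i observes the feature vectors of a dueling pair of arms.
        x1, x2 = rng.normal(size=d), rng.normal(size=d)
        z = x1 - x2  # feature difference characterizing the duel
        # Binary preference feedback drawn from a logistic (Bradley-Terry) model.
        y = float(rng.random() < sigmoid(theta_star @ z))
        # Online gradient descent step on the logistic loss of this observation.
        grad = (sigmoid(thetas[i] @ z) - y) * z
        thetas[i] = thetas[i] - (lr / np.sqrt(t)) * grad
    if t % sync_every == 0:
        # Federated averaging: the server aggregates only the parameters;
        # the raw observations (x1, x2, y) never leave the agents.
        avg = np.mean(thetas, axis=0)
        thetas = [avg.copy() for _ in range(n_agents)]

print(float(np.linalg.norm(thetas[0] - theta_star)))
```

The decaying step size and the periodic averaging are the two ingredients the abstract highlights: the gradient step replaces the closed-form update that plain linear bandits enjoy, and the averaging step is what lets agents collaborate without exchanging data.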