Distributed Fractional Bayesian Learning for Adaptive Optimization

📅 2024-04-17
🏛️ arXiv.org
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses distributed adaptive optimization, where agents have access only to local cost functions sharing a common unknown parameter and must jointly estimate that parameter while computing the optimal solution. We propose the first unified theoretical framework that integrates parameter estimation with optimization: the "Prediction while Optimization" scheme co-designs distributed fractional Bayesian learning with distributed gradient descent. Parameters are estimated via weighted averaging of log-beliefs, while decision variables are updated synchronously through stochastic-approximation-type distributed gradient steps. We establish almost-sure convergence of the beliefs to the true parameter and of the decisions to the global optimum under that parameter, and derive a sublinear convergence rate for the belief sequence. Numerical experiments confirm the method's effectiveness and robustness under parameter uncertainty. The result is a general modeling and analysis framework for distributed optimization under parametric uncertainty that simultaneously guarantees consistency of the statistical inference and optimality of the decisions.
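Read as equations, the co-designed updates described above plausibly take the standard tempered-likelihood, log-linear-pooling form below. This is a hedged sketch only: the consensus weights a_{ij}, fractional exponent α ∈ (0, 1), step size β_t, and observation model ℓ_i are assumed notation, not taken from the paper.

```latex
% Belief update: weighted averaging of neighbors' log-beliefs plus a tempered local likelihood
\mu_{i,t+1}(\theta) \;\propto\; \Big(\prod_{j \in \mathcal{N}_i} \mu_{j,t}(\theta)^{a_{ij}}\Big)\,
    \ell_i\big(y_{i,t+1} \mid \theta\big)^{\alpha}

% Decision update: consensus step plus a stochastic-approximation gradient step,
% with the local gradient averaged over the current belief about the unknown parameter
x_{i,t+1} \;=\; \sum_{j \in \mathcal{N}_i} a_{ij}\, x_{j,t}
    \;-\; \beta_t\, \mathbb{E}_{\theta \sim \mu_{i,t+1}}\!\big[\nabla_x f_i(x_{i,t};\theta)\big]
```

Here a_{ij} are the weights of the connected communication network, α tempers (fractionalizes) the likelihood, and β_t is a diminishing step size of stochastic-approximation type.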

📝 Abstract
This paper considers a distributed adaptive optimization problem, where all agents have access only to their local cost functions with a common unknown parameter, while they aim to collaboratively estimate the true parameter and find the optimal solution over a connected network. A general mathematical framework for such a problem has not been studied yet. We aim to provide valuable insights for addressing parameter uncertainty in distributed optimization problems while simultaneously finding the optimal solution. Thus, we propose a novel Prediction while Optimization scheme, which uses distributed fractional Bayesian learning through weighted averaging of the log-beliefs to update the beliefs about the unknown parameter, and distributed gradient descent to update the estimate of the optimal solution. Under suitable assumptions, we prove that all agents' beliefs and decision variables converge almost surely to the true parameter and to the optimal solution under the true parameter, respectively. We further establish a sublinear convergence rate for the belief sequence. Finally, numerical experiments are implemented to corroborate the theoretical analysis.
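A minimal runnable sketch of the scheme the abstract describes is given below. It assumes a finite parameter grid, Gaussian local observations, quadratic local costs f_i(x; θ) = ½(x − θc_i)², a doubly stochastic weight matrix, a tempering exponent α, and a 1/t step size; none of these specifics come from the paper, and the update rules follow the plausible form sketched above rather than the authors' exact equations.

```python
import numpy as np

# Toy instance; every name and modeling choice below (Gaussian observations, quadratic
# local costs, the weight matrix, alpha, the 1/t step size) is an illustrative assumption.
rng = np.random.default_rng(0)

n = 4                                   # number of agents
theta_grid = np.array([1.0, 2.0])       # finite candidate set for the unknown parameter
theta_true = 2.0
sigma = 0.5                             # observation noise std
alpha = 0.5                             # fractional (tempering) exponent on the likelihood
c = np.array([1.0, 2.0, 3.0, 4.0])      # local cost: f_i(x; theta) = 0.5 * (x - theta * c_i)**2

# Doubly stochastic consensus weights of a connected ring network
A = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

beliefs = np.full((n, len(theta_grid)), 1.0 / len(theta_grid))   # uniform initial beliefs
x = np.zeros(n)                                                  # initial decision variables

for t in range(1, 2001):
    # (1) Distributed fractional Bayesian learning: average neighbors' log-beliefs,
    #     then add the tempered log-likelihood of a fresh local observation.
    y = theta_true + sigma * rng.standard_normal(n)
    log_mu = A @ np.log(np.maximum(beliefs, 1e-300))             # floor keeps log-beliefs finite
    log_mu += alpha * (-0.5 * ((y[:, None] - theta_grid) / sigma) ** 2)
    beliefs = np.exp(log_mu - log_mu.max(axis=1, keepdims=True))
    beliefs /= beliefs.sum(axis=1, keepdims=True)

    # (2) Distributed gradient descent with a stochastic-approximation step size,
    #     averaging each local gradient over the agent's current belief.
    step = 1.0 / t
    x_next = A @ x
    for i in range(n):
        expected_grad = beliefs[i] @ (x[i] - theta_grid * c[i])  # E_theta[ grad f_i(x_i; theta) ]
        x_next[i] -= step * expected_grad
    x = x_next

print("belief on theta = 2.0 per agent:", np.round(beliefs[:, 1], 4))
print("decisions:", np.round(x, 3), "| optimum under theta_true:", theta_true * c.mean())
```

Running this should concentrate every agent's belief on θ = 2.0 and drive the decisions toward the aggregate minimizer θ·mean(c) = 5.0, a toy-scale analogue of the paper's almost-sure convergence guarantees.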
Problem

Research questions and friction points this paper is trying to address.

Distributed adaptive optimization with local cost functions
Collaborative estimation of unknown common parameter
Simultaneous parameter learning and optimal solution finding
Innovation

Methods, ideas, or system contributions that make the work stand out.

Distributed fractional Bayesian learning for parameter estimation
Weighted averaging on log-beliefs to update beliefs
Distributed gradient descent for optimal solution