🤖 AI Summary
This work addresses the challenge of collaborative learning of a global environmental function in multi-robot systems operating under limited local observations and communication. To this end, the authors propose DistGP, a novel method that constructs a ring-structured sparse Gaussian process factor graph aligned with the system's communication topology, thereby overcoming the limitations of conventional tree-based structures. DistGP enables asynchronous, distributed online learning via Gaussian belief propagation and supports dynamic communication topologies and continual learning. The approach achieves accuracy comparable to centralized batch models while significantly outperforming distributed neural network methods such as DiNNO in communication-sparse scenarios, demonstrating superior robustness and adaptability.
📄 Abstract
We propose DistGP: a multi-robot learning method for collaborative learning of a global function using only local experience and computation. We utilise a sparse Gaussian process (GP) model with a factorisation that mirrors the multi-robot structure of the task and admits distributed training via Gaussian belief propagation (GBP). Our loopy model outperforms Tree-Structured GPs \cite{bui2014tree} and can be trained online and in settings with dynamic connectivity. We show that such distributed, asynchronous training can reach the same performance as a centralised, batch-trained model, albeit with slower convergence. Finally, we compare to DiNNO \cite{yu2022dinno}, a distributed neural network (NN) optimiser, and find DistGP achieves superior accuracy, is more robust to sparse communication, and is better able to learn continually.
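To make the Gaussian belief propagation machinery referred to above concrete, below is a minimal sketch of synchronous, scalar GBP on a ring factor graph: each "robot" holds a noisy local estimate of a shared parameter, and quadratic smoothing factors of weight `w` link neighbours on the communication ring. All names and parameters here are illustrative assumptions for exposition, not the paper's DistGP implementation, which passes messages between sparse GP factors rather than scalar variables.

```python
import numpy as np

def gbp_ring(prior_mean, prior_prec, w=1.0, iters=300):
    """Synchronous scalar GBP in information (eta, lambda) form on a ring.

    Edge k is a smoothing factor 0.5 * w * (x_k - x_{k+1})^2 linking
    variable k (slot 0) and variable (k + 1) % n (slot 1). Each variable
    also carries a Gaussian prior factor (its local measurement).
    """
    prior_mean = np.asarray(prior_mean, dtype=float)
    prior_prec = np.asarray(prior_prec, dtype=float)
    n = len(prior_mean)
    prior_eta = prior_prec * prior_mean
    # Factor-to-variable messages, indexed [edge, endpoint slot].
    msg_eta = np.zeros((n, 2))
    msg_lam = np.zeros((n, 2))
    for _ in range(iters):
        new_eta = np.zeros_like(msg_eta)
        new_lam = np.zeros_like(msg_lam)
        for k in range(n):
            i, j = k, (k + 1) % n
            # Message edge k -> variable j: fold in x_i's variable-to-factor
            # message (prior plus its *other* incoming edge), then
            # marginalise x_i out of the pairwise factor.
            eta_in = prior_eta[i] + msg_eta[(i - 1) % n, 1]
            lam_in = prior_prec[i] + msg_lam[(i - 1) % n, 1]
            new_lam[k, 1] = w - w**2 / (w + lam_in)
            new_eta[k, 1] = w * eta_in / (w + lam_in)
            # Message edge k -> variable i: symmetric direction.
            eta_in = prior_eta[j] + msg_eta[j, 0]
            lam_in = prior_prec[j] + msg_lam[j, 0]
            new_lam[k, 0] = w - w**2 / (w + lam_in)
            new_eta[k, 0] = w * eta_in / (w + lam_in)
        msg_eta, msg_lam = new_eta, new_lam
    # Belief at variable i: prior plus both incoming factor messages.
    idx = np.arange(n)
    lam_b = prior_prec + msg_lam[idx, 0] + msg_lam[(idx - 1) % n, 1]
    eta_b = prior_eta + msg_eta[idx, 0] + msg_eta[(idx - 1) % n, 1]
    return eta_b / lam_b  # posterior means
```

Although the ring is loopy, this Gaussian model is diagonally dominant (hence walk-summable), so GBP converges, and a standard result for Gaussian models is that the converged means match the exact posterior means (the variances, in general, do not). This can be verified by comparing against a direct solve of the full information-form linear system.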