🤖 AI Summary
This work addresses the challenge of efficiently simulating a target probability distribution while ensuring strong function computation guarantees in randomized distributed function computation (RDFC) under limited common randomness. It introduces deep learning into the RDFC framework for the first time, proposing an unsupervised autoencoder-based method trained to minimize the total variation distance between the distribution simulated by the autoencoder outputs and an unknown target distribution, using only data samples. The proposed approach significantly reduces the communication load compared with conventional data compression schemes while maintaining high function computation accuracy, thereby establishing a deep learning-based paradigm for efficient distributed computation under constrained randomness.
📝 Abstract
The randomized distributed function computation (RDFC) framework, which unifies many cutting-edge distributed computation and learning applications, is considered. An autoencoder (AE) architecture is proposed to minimize the total variation distance between the probability distribution simulated by the AE outputs and an unknown target distribution, using only data samples. We demonstrate high RDFC performance with significant communication load gains from our AEs compared to data compression methods. Our designs establish deep learning-based RDFC methods and aim to facilitate their use, especially when the amount of common randomness is limited and strong function computation guarantees are required.
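To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of an autoencoder trained to match an unknown target distribution, known only through samples, under a small communication budget. Total variation distance between two pmfs p and q over a finite alphabet equals 0.5·‖p − q‖₁, so both pmfs are estimated empirically and compared with an L1 loss; the discrete latent message models the limited communication. All names, architecture choices, and hyperparameters here are illustrative assumptions.

```python
# Illustrative sketch only: an AE whose discrete latent acts as the transmitted
# message, trained so that the empirical pmf of its outputs is close (in total
# variation distance) to the empirical pmf of target samples.
import torch
import torch.nn as nn
import torch.nn.functional as F

ALPHABET = 16    # assumed finite output alphabet size
LATENT_BITS = 3  # assumed per-sample communication budget (2^3 = 8 messages)

class RDFCAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                                     nn.Linear(32, 2 ** LATENT_BITS))
        self.decoder = nn.Sequential(nn.Linear(2 ** LATENT_BITS, 32), nn.ReLU(),
                                     nn.Linear(32, ALPHABET))

    def forward(self, x, tau=0.5):
        # Gumbel-softmax yields a differentiable one-hot "message" over the
        # 2^LATENT_BITS possible transmitted symbols.
        msg = F.gumbel_softmax(self.encoder(x), tau=tau, hard=True)
        # Decoder outputs a per-sample distribution over the output alphabet.
        return F.softmax(self.decoder(msg), dim=-1)

def tv_distance(p, q):
    # Total variation distance between two pmfs: half the L1 distance.
    return 0.5 * (p - q).abs().sum()

model = RDFCAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(1000):
    x = torch.randn(512, 1)                          # stand-in input samples
    target_idx = torch.randint(0, ALPHABET, (512,))  # stand-in target samples
    target_pmf = torch.bincount(target_idx, minlength=ALPHABET).float() / 512
    output_pmf = model(x).mean(dim=0)  # empirical output pmf of the AE
    loss = tv_distance(output_pmf, target_pmf)
    opt.zero_grad(); loss.backward(); opt.step()
```

The choice of a Gumbel-softmax latent is one common way to keep a discrete, rate-limited message differentiable during training; the paper's actual architecture and loss estimator may differ.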