🤖 AI Summary
This study addresses the problem of decentralized secure aggregation (DSA), in which the privacy of each user's individual input (beyond the intended sum) must be preserved even when any user colludes with up to T other users. Combining an information-theoretic security model with secret-sharing and network-coding techniques, the work establishes the first complete characterization of the optimal rate region for DSA, deriving fundamental lower bounds on both the communication and the secret-key overheads. The main results show that, to securely compute a 1-bit sum, each user must transmit at least 1 bit and hold at least 1 bit of secret key, while the users collectively require at least K−1 bits of independent secret-key material. These results provide a theoretical foundation for provably secure federated learning systems.
📝 Abstract
Motivated by the increasing demand for data security in decentralized federated learning (FL) and stochastic optimization, we formulate and investigate the problem of information-theoretic \emph{decentralized secure aggregation} (DSA). Specifically, we consider a network of $K$ interconnected users, each holding a private input (representing, for example, a local model update in FL), who aim to simultaneously compute the sum of all inputs while satisfying the security requirement that no user, even when colluding with up to $T$ others, learns anything beyond the intended sum. We characterize the optimal rate region, which specifies the minimum achievable communication and secret key rates for DSA. In particular, we show that to securely compute one bit of the desired input sum, (i) each user must transmit at least one bit to all other users, (ii) each user must hold at least one bit of secret key, and (iii) the users must collectively hold no fewer than $K - 1$ independent key bits. Our result establishes the fundamental performance limits of DSA and offers insights into the design of provably secure and communication-efficient protocols for distributed learning systems.
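To make the rate figures concrete, here is a minimal sketch of the classic zero-sum one-time-pad masking idea that underlies such aggregation schemes. This is my own illustration, not the paper's exact protocol, and it ignores the $T$-collusion key-design details: $K$ users hold inputs in $\mathbb{Z}_p$ and share keys that sum to zero (only $K-1$ of them are independent, matching bound (iii)); each user broadcasts one masked symbol per input symbol (bounds (i) and (ii)), and the masks cancel in the sum.

```python
import secrets

def make_zero_sum_keys(K: int, p: int) -> list[int]:
    """K-1 independent uniform keys; the last key forces a zero sum mod p."""
    keys = [secrets.randbelow(p) for _ in range(K - 1)]
    keys.append(-sum(keys) % p)  # only K-1 keys are independent
    return keys

def aggregate(inputs: list[int], p: int) -> int:
    """Each user broadcasts y_i = x_i + z_i mod p; masks cancel in the sum."""
    keys = make_zero_sum_keys(len(inputs), p)
    messages = [(x + z) % p for x, z in zip(inputs, keys)]
    return sum(messages) % p  # equals sum(inputs) mod p

p = 2**16 + 1
print(aggregate([3, 14, 15, 92], p))  # 124
```

Each broadcast symbol $y_i$ is individually uniform over $\mathbb{Z}_p$, so an observer of the messages alone learns only the sum; achieving security against $T$ colluding users requires the correlated key constructions analyzed in the paper.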