🤖 AI Summary
This work proposes and formally defines the problem of asymptotic subspace consensus: in dynamic networks, nodes must converge their outputs to a common subspace while remaining within the convex hull of their initial vectors, rather than converging to a single consensus point as in classical settings. Under a memoryless message adversary model, the study demonstrates that standard asymptotic consensus algorithms naturally degenerate into this subspace variant under weak connectivity conditions. By integrating tools from distributed algorithm analysis, convex geometry, and dynamic graph theory, the paper fully characterizes the solvability conditions for this problem and establishes, for the first time, tight upper and lower bounds on the rate at which the dimension of the consensus subspace contracts over time.
📝 Abstract
We introduce the problem of asymptotic subspace consensus, which requires the outputs of processes to converge to a common subspace while remaining inside the convex hull of the initial vectors. This relaxes asymptotic consensus, in which outputs must converge to a single point, i.e., a zero-dimensional affine subspace.
We give a complete characterization of the solvability of asymptotic subspace consensus under oblivious message adversaries. In particular, we show that a large class of algorithms used for asymptotic consensus gracefully degrades to asymptotic subspace consensus in distributed systems with weaker assumptions on the communication network. We also present bounds on the rate at which a dimension lower than the initial one is reached.
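To make the setting concrete, here is a minimal sketch (not the paper's algorithm, and with a hypothetical random edge choice standing in for the message adversary) of round-based averaging in a dynamic network. Each node updates to a convex combination of the vectors it hears, so every state provably stays in the convex hull of the initial vectors; under weak connectivity the states need not collapse to a single point, only toward a lower-dimensional common subspace.

```python
import random

def average_step(states, edges):
    """One round: node i replaces its vector by the average of its own
    vector and those of its in-neighbors (a convex combination)."""
    n = len(states)
    new = []
    for i, x in enumerate(states):
        heard = [states[j] for j in range(n) if (j, i) in edges] + [x]
        new.append(tuple(sum(v[k] for v in heard) / len(heard)
                         for k in range(len(x))))
    return new

# Example: 3 nodes with 2-D initial vectors spanning a triangle.
states = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
random.seed(0)
for _ in range(50):
    # Hypothetical adversary: one arbitrary directed edge per round.
    edges = {(random.randrange(3), random.randrange(3))}
    states = average_step(states, edges)
# All states remain inside the triangle spanned by the initial vectors.
```

With so sparse a communication pattern, the nodes' vectors need not approach a common point, but each remains a convex combination of the initial vectors throughout, illustrating the convex-hull invariant in the problem statement.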