Lower Bounds for Non-adaptive Local Computation Algorithms

📅 2025-05-01
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper establishes tight lower bounds on the query complexity of non-adaptive Local Computation Algorithms (LCAs) for computing constant-factor approximations to Maximum Matching (MM) and Minimum Vertex Cover (MVC). Addressing the gap between the best known non-adaptive LCA complexity—Δ<sup>O(log Δ / log log Δ)</sup>, obtained via the Parnas–Ron reduction—and the poly(Δ) complexity achievable by adaptive LCAs, the work proves a matching Δ<sup>Ω(log Δ / log log Δ)</sup> lower bound: the first separation between non-adaptive and adaptive LCAs, showing the Parnas–Ron upper bound is optimal up to constants in the exponent. Technically, the proof blends sublinear-time lower-bound techniques—couplings over acyclic subgraphs—with distributed lower-bound methods, applied to a modified Kuhn–Moscibroda–Wattenhofer (KMW) graph construction. The result also rules out one avenue toward faster truly sublinear-space algorithms for approximate MM in the Massively Parallel Computation (MPC) model, since even a modest improvement over the Parnas–Ron upper bound for non-adaptive MVC would have yielded such algorithms.

📝 Abstract
We study *non-adaptive* Local Computation Algorithms (LCA). A reduction of Parnas and Ron (TCS'07) turns any distributed algorithm into a non-adaptive LCA. Plugging in known distributed algorithms, this leads to non-adaptive LCAs for constant approximations of maximum matching (MM) and minimum vertex cover (MVC) with complexity $\Delta^{O(\log \Delta / \log\log \Delta)}$, where $\Delta$ is the maximum degree of the graph. Allowing adaptivity, this bound can be significantly improved to $\text{poly}(\Delta)$, but is such a gap necessary or are there better non-adaptive LCAs? Adaptivity as a resource has been studied extensively across various areas. Beyond this, we further motivate the study of non-adaptive LCAs by showing that even a modest improvement over the Parnas-Ron bound for the MVC problem would have major implications in the Massively Parallel Computation (MPC) setting; it would lead to faster truly sublinear space MPC algorithms for approximate MM, a major open problem of the area. Our main result is a lower bound that rules out this avenue for progress. We prove that $\Delta^{\Omega(\log \Delta / \log\log \Delta)}$ queries are needed for any non-adaptive LCA computing a constant approximation of MM or MVC. This is the first separation between non-adaptive and adaptive LCAs, and already matches (up to constants in the exponent) the algorithm obtained by the black-box reduction of Parnas and Ron. Our proof blends techniques from two separate lines of work: sublinear time lower bounds and distributed lower bounds. Particularly, we adopt techniques such as couplings over acyclic subgraphs from the recent sublinear time lower bounds of Behnezhad, Roghani, and Rubinstein (STOC'23, FOCS'23, STOC'24). We apply these techniques to a very different instance, (a modified version of) the construction of Kuhn, Moscibroda and Wattenhofer (JACM'16) from distributed computing.
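To see why the separation in the abstract is meaningful, note that the exponent $\log \Delta / \log\log \Delta$ grows without bound as $\Delta$ grows, so $\Delta^{\Theta(\log \Delta / \log\log \Delta)}$ eventually dominates $\Delta^k$ for every fixed $k$. A minimal numerical sketch (the constant in the exponent and the polynomial degree $k=3$ are illustrative choices, not values from the paper):

```python
import math

def nonadaptive_bound(delta, c=1.0):
    # Delta^(c * log Delta / log log Delta): the shape of the
    # non-adaptive LCA bound (c is an illustrative constant).
    return delta ** (c * math.log(delta) / math.log(math.log(delta)))

def poly_bound(delta, k=3):
    # Delta^k: a fixed-degree polynomial, the shape of the
    # adaptive LCA bound (k is an illustrative degree).
    return delta ** k

# The exponent log Delta / log log Delta grows unboundedly, so the
# non-adaptive bound overtakes any fixed polynomial for large Delta.
for delta in [16, 256, 4096, 65536]:
    exp = math.log(delta) / math.log(math.log(delta))
    ratio = nonadaptive_bound(delta) / poly_bound(delta)
    print(f"Delta={delta:6d}  exponent={exp:5.2f}  ratio to Delta^3 = {ratio:.2e}")
```

For small $\Delta$ the polynomial is larger, but by $\Delta = 65536$ the exponent already exceeds 3 and the non-adaptive bound dominates; the gap widens super-polynomially from there.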
Problem

Research questions and friction points this paper is trying to address.

Studies non-adaptive LCAs for graph problems like MM and MVC
Explores gap between non-adaptive and adaptive LCA complexities
Proves lower bounds for non-adaptive LCAs in graph approximations
Innovation

Methods, ideas, or system contributions that make the work stand out.

First separation between non-adaptive and adaptive LCAs
Tight Δ^Ω(log Δ / log log Δ) query lower bound for approximate MM and MVC
Blend of sublinear-time and distributed lower-bound techniques
Amir Azarmehr
Northeastern University
Theoretical Computer Science
Soheil Behnezhad
Northeastern University, Boston, Massachusetts, USA
Alma Ghafari
Northeastern University, Boston, Massachusetts, USA
Madhu Sudan
Gordon McKay Professor of Computer Science, Harvard University