Two for One, One for All: Deterministic LDC-based Robust Computation in Congested Clique

📅 2025-08-12
📈 Citations: 0
Influential: 0
🤖 AI Summary
This paper addresses fault-tolerant computation in the Congested Clique model under adversarial crash failures, where a fixed fraction α < 1 of the nodes may crash. Methodologically, it introduces the first deterministic compiler for this setting via: (i) a derandomized construction of locally decodable codes (LDCs), made possible because erasure locations (the crashed nodes) are known, yielding deterministic local decoding; (ii) adaptive multiplicative (doubling) querying, load-balanced query scheduling, and dynamic replacement of crashed decoding nodes, which jointly ensure decoding robustness and communication efficiency. The key result: any circuit of depth *d*, width *ω*, and total gate fan *Δ* can be evaluated under crashes in *d*·⌈*ω*/*n*² + *Δ*/*n*⌉·2<sup>*O*(√(log *n*) log log *n*)</sup> rounds, and as a corollary any *T*-round Congested Clique algorithm can be compiled into a fault-tolerant execution requiring *T*²·*n*<sup>*o*(1)</sup> rounds. All prior compilers achieving comparable resilience were randomized.

📝 Abstract
We design a deterministic compiler that makes any computation in the Congested Clique model robust to a constant fraction $α<1$ of adversarial crash faults. In particular, we show how a network of $n$ nodes can compute any circuit of depth $d$, width $ω$, and total gate fan $Δ$, in $d \cdot \lceil \frac{ω}{n^2} + \frac{Δ}{n} \rceil \cdot 2^{O(\sqrt{\log n}\,\log\log n)}$ rounds in such a faulty model. As a corollary, any $T$-round Congested Clique algorithm can be compiled into an algorithm that completes in $T^2 n^{o(1)}$ rounds in this model. Our compiler obtains resilience to node crashes by coding information across the network, where we leverage locally-decodable codes (LDCs) to maintain a low complexity overhead, as these allow recovering the information needed at each computational step by querying only small parts of the codeword. The main technical contribution is that because erasures occur in known locations, which correspond to crashed nodes, we can derandomize classical LDC constructions by deterministically selecting query sets that avoid sufficiently many erasures. Moreover, when decoding multiple codewords in parallel, our derandomization load-balances the queries per node, thereby preventing congestion and maintaining a low round complexity. Deterministic decoding of LDCs presents a new challenge: the adversary can target precisely the (few) nodes that are queried for decoding a certain codeword. We overcome this issue via an adaptive doubling strategy: if a decoding attempt for a codeword fails, the node doubles the number of its decoding attempts. Similarly, when the adversary crashes the decoding node itself, we replace it dynamically with two other non-crashed nodes. By carefully combining these two doubling processes, we overcome the challenges posed by the combination of a deterministic LDC with a worst-case pattern of crashes.
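As a toy illustration of the derandomization idea, here is a sketch using the classical 2-query Hadamard LDC. This is my own simplified example, not the paper's construction (which targets far more efficient codes); it only shows how known erasure locations let the decoder pick its query set deterministically instead of at random.

```python
# Toy illustration: derandomized local decoding when erasure locations
# are KNOWN, using the 2-query Hadamard LDC. Codeword: C(x)[a] = <x, a>
# mod 2 for every a in {0,1}^k. A randomized decoder samples the query
# pair (a, a ^ e_i) uniformly; with known erasures we can instead scan
# deterministically for a pair that avoids all erased positions.

def hadamard_encode(x_bits):
    """Encode k message bits into a 2^k-bit Hadamard codeword."""
    k = len(x_bits)
    return [sum(x_bits[j] & ((a >> j) & 1) for j in range(k)) % 2
            for a in range(1 << k)]

def decode_bit(codeword, erased, i, k):
    """Recover message bit i as C[a] ^ C[a ^ e_i] for a deterministically
    chosen a such that neither query hits a known erasure. The 2^k
    positions split into 2^(k-1) disjoint pairs {a, a ^ e_i}, and each
    erasure kills at most one pair, so an erasure fraction below 1/2
    always leaves a usable pair."""
    e_i = 1 << i
    for a in range(1 << k):
        b = a ^ e_i
        if a not in erased and b not in erased:
            return codeword[a] ^ codeword[b]
    raise ValueError("too many erasures to decode")
```

For example, with message `[1, 0, 1]` and erased codeword positions `{0, 5, 6}` (an erasure fraction of 3/8), every message bit remains recoverable.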
Problem

Research questions and friction points this paper is trying to address.

Design deterministic compiler for fault-tolerant Congested Clique computation
Leverage LDCs to recover data efficiently despite node crashes
Derandomize LDC queries to balance load and prevent congestion
Innovation

Methods, ideas, or system contributions that make the work stand out.

Deterministic compiler for robust computation
Locally-decodable codes for low complexity
Adaptive doubling strategy for fault tolerance
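The adaptive doubling strategy from the abstract can be sketched as a phase-based simulation. The code below is my own illustrative abstraction for decoding a single codeword: the function name `run_phase` and the success predicate are hypothetical stand-ins, not the paper's protocol, which additionally handles parallel codewords and load balancing.

```python
# Illustrative sketch of the two doubling processes: a decoder whose
# attempt fails doubles its number of decoding attempts in the next
# phase, and a decoder crashed by the adversary is replaced by two
# fresh live nodes that inherit its attempt budget.

def run_phase(decoders, crashed_this_phase, attempt_succeeds):
    """Run one phase of decoding a single codeword.

    decoders: dict node_id -> number of decoding attempts this phase.
    crashed_this_phase: set of decoder ids the adversary crashes now.
    attempt_succeeds: predicate(node, budget) -> bool, abstracting
        whether one of the node's `budget` attempts avoided all crashes.
    Returns (decoders_for_next_phase, decoding_done).
    """
    nxt = {}
    for node, budget in decoders.items():
        if node in crashed_this_phase:
            # Crashed decoder: replace it with two live nodes, each
            # inheriting the same attempt budget.
            nxt[f"{node}.a"] = budget
            nxt[f"{node}.b"] = budget
        elif attempt_succeeds(node, budget):
            # One successful decoding suffices for this codeword.
            return {}, True
        else:
            # Failed attempt: double the attempts for the next phase.
            nxt[node] = 2 * budget
    return nxt, False
```

Starting from a single decoder with budget 1, crashing it once and then letting attempts fail until a budget of 4 is reached, the simulation terminates after four phases with two replacement decoders in flight.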