🤖 AI Summary
Existing constant-bit Transformer models simulating Turing machines (TMs) require Ω(s(n)) inference steps per TM step, resulting in suboptimal computational efficiency. Method: We introduce the multi-queue Turing machine (MQTM) as an intermediate model, prove its equivalence to multi-tape TMs, and design a synchronous bridging mechanism. Integrating sparse attention with fixed geometric-offset positional encoding, we construct the first constant-bit Transformer capable of efficiently simulating (t(n), s(n))-bounded TMs. Contribution/Results: Our model achieves O(s(n)^c) inference steps per simulated TM step (where c > 0 can be made arbitrarily small) with O(s(n)) context length, nearly matching the theoretical lower bound. The core innovation lies in the synergistic co-design of MQTM-based modeling and lightweight attention, substantially narrowing the computational efficiency gap between neural architectures and Turing completeness.
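The queue-based bridge above can be illustrated with a classic trick: a single TM tape can be held in a queue, with the cell under the head at the front, so a rightward head move is just one dequeue/enqueue rotation. This is a minimal sketch of the general idea only, not the paper's synchronous MQTM construction (which handles leftward moves and multiple tapes far more efficiently); the function names are hypothetical.

```python
from collections import deque

def write(tape: deque, symbol: str) -> None:
    """Overwrite the cell currently under the head (front of the queue)."""
    tape[0] = symbol

def step_right(tape: deque) -> None:
    """Move the head one cell right: rotate the front cell to the back."""
    tape.append(tape.popleft())

# Tape contents "abc" with the head on 'a'.
tape = deque("abc")
write(tape, "X")      # tape is now X b c, head on 'X'
step_right(tape)      # head moves onto 'b'; queue order becomes b c X
print("".join(tape))  # prints "bcX"
```

Note the asymmetry that motivates a more careful construction: moving the head left would require rotating the queue almost all the way around, which is exactly the kind of overhead the paper's synchronous multi-queue simulation is designed to avoid.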
📝 Abstract
Constant bit-size Transformers are known to be Turing complete, but existing constructions require $\Omega(s(n))$ chain-of-thought (CoT) steps per simulated Turing machine (TM) step, leading to impractical reasoning lengths. In this paper, we significantly reduce this efficiency gap by proving that any $(t(n),s(n))$-bounded multi-tape TM can be simulated by a constant bit-size Transformer with an optimal $O(s(n))$-long context window and only $O(s(n)^c)$ CoT steps per TM step, where $c>0$ can be made arbitrarily small by making the Transformer's head-layer product sufficiently large. In addition, our construction shows that sparse attention with fixed geometric offsets suffices for efficient universal computation. Our proof leverages multi-queue TMs as a bridge. The main technical novelty is a more efficient simulation of multi-tape TMs by synchronous multi-queue TMs, improving both time and space complexity under stricter model assumptions.
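To make the sparse-attention claim concrete, here is a small sketch of an attention mask built from fixed geometric offsets. The specific choice of powers of two is an illustrative assumption, not taken from the paper's construction: each query position $i$ attends only to itself and to positions $i - 2^k$, so every position has $O(\log n)$ attention edges yet can reach the whole context in a few hops.

```python
def geometric_offset_mask(n: int) -> list[list[bool]]:
    """mask[i][j] is True iff query position i may attend to key position j.

    Each position attends to itself and to the fixed offsets 1, 2, 4, ...
    behind it, giving O(log n) nonzero entries per row instead of O(n).
    """
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        mask[i][i] = True          # self-attention edge
        k = 0
        while (1 << k) <= i:       # offsets 2^0, 2^1, ... that stay in range
            mask[i][i - (1 << k)] = True
            k += 1
    return mask

mask = geometric_offset_mask(8)
# Position 7 attends to itself and to offsets 1, 2, 4 behind it.
print([j for j in range(8) if mask[7][j]])  # prints [3, 5, 6, 7]
```

The design point is that the offsets are fixed and input-independent, so the sparsity pattern can be hard-wired into positional encodings rather than computed from the content, matching the abstract's "fixed geometric offsets" phrasing.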