Efficient Turing Machine Simulation with Transformers

📅 2025-09-28
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing constant-bit Transformer models simulating Turing machines (TMs) require Ω(s(n)) inference steps per TM step, resulting in suboptimal computational efficiency. Method: we introduce the multi-queue Turing machine (MQTM) as an intermediate model, prove its equivalence to multi-tape TMs, and design a synchronous bridging mechanism. Combining sparse attention with fixed geometric-offset positional encoding, we construct the first constant-bit Transformer that efficiently simulates (t(n), s(n))-bounded TMs. Contribution/Results: our model needs only O(s(n)^c) inference steps per TM step (where c > 0 can be made arbitrarily small) and O(s(n)) context length, nearly matching the theoretical lower bound. The core innovation is the synergistic co-design of MQTM-based modeling and lightweight attention, substantially narrowing the computational efficiency gap between neural architectures and Turing completeness.
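The summary's key architectural ingredient is sparse attention whose access pattern is a fixed set of geometric offsets. A minimal sketch of what such a mask could look like (the function name and ratio `r` are illustrative assumptions, not the paper's construction): each position attends only to itself and to positions 1, r, r², … steps back, so every row of the mask has O(log n) entries.

```python
# Hypothetical sketch of a sparse attention mask with fixed geometric
# offsets: position i may attend only to itself and to positions
# i - r**k for k = 0, 1, 2, ...  The ratio r = 2 is an illustrative choice.

def geometric_offset_mask(n: int, r: int = 2) -> list[list[bool]]:
    """Boolean mask: mask[i][j] is True iff position i may attend to j."""
    mask = [[False] * n for _ in range(n)]
    for i in range(n):
        mask[i][i] = True          # every token attends to itself
        offset = 1
        while i - offset >= 0:
            mask[i][i - offset] = True
            offset *= r            # offsets 1, r, r**2, ...
    return mask

mask = geometric_offset_mask(8)
# Position 7 attends to itself and to offsets 1, 2, 4 back:
print([j for j in range(8) if mask[7][j]])  # -> [3, 5, 6, 7]
```

Each position thus has only logarithmically many attendable predecessors, which is what makes the attention "lightweight" while, per the paper's result, still sufficing for universal computation.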

๐Ÿ“ Abstract
Constant bit-size Transformers are known to be Turing complete, but existing constructions require $\Omega(s(n))$ chain-of-thought (CoT) steps per simulated Turing machine (TM) step, leading to impractical reasoning lengths. In this paper, we significantly reduce this efficiency gap by proving that any $(t(n),s(n))$-bounded multi-tape TM can be simulated by a constant bit-size Transformer with an optimal $O(s(n))$-long context window and only $O(s(n)^c)$ CoT steps per TM step, where $c>0$ can be made arbitrarily small by making the Transformer's head-layer product sufficiently large. In addition, our construction shows that sparse attention with fixed geometric offsets suffices for efficient universal computation. Our proof leverages multi-queue TMs as a bridge. The main technical novelty is a more efficient simulation of multi-tape TMs by synchronous multi-queue TMs, improving both time and space complexity under stricter model assumptions.
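The queue-based bridge rests on a classical idea: a FIFO queue can emulate a TM tape by cycling symbols from front to back, which moves the "head" around a circular tape. A toy illustration of that single step (this is not the paper's synchronous multi-queue construction, just the underlying queue-as-tape trick):

```python
from collections import deque

# Toy illustration: a FIFO queue emulates a circular tape.
# The cell under the head is the front of the queue; moving the head one
# cell right means cycling the front symbol to the back.

def rotate_right(tape: deque) -> None:
    """Move the head one cell right by cycling the front symbol to the back."""
    tape.append(tape.popleft())

tape = deque("abcd")   # head is over 'a'
rotate_right(tape)
print("".join(tape))   # -> 'bcda' (head now over 'b')
```

A multi-queue TM runs several such queues in parallel; the paper's contribution is keeping them synchronized so that each multi-tape TM step costs only $O(s(n)^c)$ queue operations rather than a full $\Omega(s(n))$ rotation.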
Problem

Research questions and friction points this paper is trying to address.

Reduces Transformer simulation steps for Turing machines efficiently
Achieves optimal context window length for TM simulation
Demonstrates sparse attention suffices for universal computation
Innovation

Methods, ideas, or system contributions that make the work stand out.

Simulates Turing machines with optimal O(s(n)) context window
Uses sparse attention with fixed geometric offsets
Reduces CoT steps to O(s(n)^c) per TM step
Qian Li
Shenzhen International Center For Industrial And Applied Mathematics, Shenzhen Research Institute of Big Data
Yuyi Wang
ETH Zurich
Algorithm and theory · Machine learning · Blockchain · Novel applications