🤖 AI Summary
Large language model (LLM) inference in Decentralized Physical Infrastructure Networks (DePIN) faces a verifiability trilemma—trade-offs among computational integrity, low latency, and cost efficiency.
Method: This paper proposes Optimistic TEE-Rollups (OTR), a hybrid architecture that integrates NVIDIA H100 Confidential Computing Trusted Execution Environments (TEEs) for sub-second provisional finality, an optimistic rollup fraud-proof framework, randomized zero-knowledge spot-checking, and hardware-enforced remote attestation with execution-trace binding. It introduces Proof of Efficient Attribution (PoEA), a novel consensus mechanism.
Contribution/Results: PoEA achieves Byzantine fault tolerance while resisting model-downgrade and reward-manipulation attacks. OTR delivers 99% of centralized throughput, reduces per-query inference cost to $0.07, and empirically withstands transient hardware vulnerabilities. Unlike ZKML, whose proving overhead grows superlinearly, and pure optimistic rollups, which suffer from long dispute windows, OTR is the first system to simultaneously achieve high throughput, low latency, and low cost under strong security assumptions.
📝 Abstract
The rapid integration of Large Language Models (LLMs) into decentralized physical infrastructure networks (DePIN) is currently bottlenecked by the Verifiability Trilemma, which posits that a decentralized inference system cannot simultaneously achieve high computational integrity, low latency, and low cost. Existing cryptographic solutions, such as Zero-Knowledge Machine Learning (ZKML), suffer from superlinear proving overheads (O(k · N log N)) that render them infeasible for billion-parameter models. Conversely, optimistic approaches (opML) impose prohibitive dispute windows, preventing real-time interactivity, while recent "Proof of Quality" (PoQ) paradigms sacrifice cryptographic integrity for subjective semantic evaluation, leaving networks vulnerable to model downgrade attacks and reward hacking. In this paper, we introduce Optimistic TEE-Rollups (OTR), a hybrid verification protocol that harmonizes these constraints. OTR leverages NVIDIA H100 Confidential Computing Trusted Execution Environments (TEEs) to provide sub-second Provisional Finality, underpinned by an optimistic fraud-proof mechanism and stochastic Zero-Knowledge spot-checks to mitigate hardware side-channel risks. We formally define Proof of Efficient Attribution (PoEA), a consensus mechanism that cryptographically binds execution traces to hardware attestations, thereby guaranteeing model authenticity. Extensive simulations demonstrate that OTR achieves 99% of the throughput of centralized baselines with a marginal cost overhead of $0.07 per query, maintaining Byzantine fault tolerance against rational adversaries even in the presence of transient hardware vulnerabilities.
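The core idea behind PoEA's trace binding can be sketched in a few lines. This is an illustrative, hypothetical simplification: the TEE's hardware-held attestation key is simulated here with an HMAC secret, and the digest scheme and field names are assumptions for exposition, not the paper's actual protocol. Real H100 Confidential Computing attestation uses hardware-rooted keys and signed attestation reports.

```python
import hashlib
import hmac
import json

# Hypothetical stand-in for the TEE's hardware-held attestation key.
# A real deployment never exposes this key; the enclave signs reports internally.
TEE_ATTESTATION_KEY = b"simulated-hardware-root-of-trust"

def trace_digest(model_id: str, prompt: str, output: str) -> str:
    """Digest committing to the model identity and the full I/O execution trace."""
    payload = json.dumps(
        {"model": model_id, "prompt": prompt, "output": output},
        sort_keys=True,
    ).encode()
    return hashlib.sha256(payload).hexdigest()

def attest(model_id: str, prompt: str, output: str) -> dict:
    """Produce a (simulated) attestation report binding the trace to the TEE."""
    digest = trace_digest(model_id, prompt, output)
    signature = hmac.new(
        TEE_ATTESTATION_KEY, digest.encode(), hashlib.sha256
    ).hexdigest()
    return {"trace_digest": digest, "signature": signature}

def verify(model_id: str, prompt: str, output: str, report: dict) -> bool:
    """Verifier recomputes the digest and checks the attestation binding."""
    expected = trace_digest(model_id, prompt, output)
    if expected != report["trace_digest"]:
        # The output or the claimed model was swapped (e.g. a downgrade attack).
        return False
    sig = hmac.new(
        TEE_ATTESTATION_KEY, expected.encode(), hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(sig, report["signature"])
```

Under this sketch, a node that silently serves a cheaper model while claiming the advertised one produces a digest that no longer matches the attested trace, so verification fails even though the response text may look plausible.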