🤖 AI Summary
Existing cryptographic schemes face fundamental bottlenecks in untrusted environments, including an intractable privacy-efficiency trade-off, poor scalability, and high computational complexity. To address these challenges, this paper proposes the Trusted Capable Model Environment (TCME): a novel privacy-preserving computing paradigm that employs highly capable, stateless machine learning models as trusted computational primitives. TCME replaces traditional trusted third parties and cryptographic primitives with models constrained by controlled I/O interfaces, explicit information flow control, and stateless interaction protocols. Crucially, it is the first framework to leverage strong ML models as foundational infrastructure for private inference, achieving efficiency without relying on multi-party computation or zero-knowledge proofs. Experimental results demonstrate that TCME supports diverse private inference tasks infeasible for conventional cryptography and solves certain classic cryptographic problems, thereby significantly broadening the applicability and practicality of secure computation.
📝 Abstract
We often interact with untrusted parties. Prioritizing privacy can limit the effectiveness of these interactions, as achieving certain goals necessitates sharing private data. Traditionally, addressing this challenge has involved either seeking trusted intermediaries or constructing cryptographic protocols that restrict how much data is revealed, such as multi-party computation or zero-knowledge proofs. While significant advances have been made in scaling cryptographic approaches, they remain limited in the size and complexity of the applications they can support. In this paper, we argue that capable machine learning models can fulfill the role of a trusted third party, thereby enabling secure computations for applications that were previously infeasible. In particular, we describe Trusted Capable Model Environments (TCMEs) as an alternative approach for scaling secure computation, where capable machine learning models interact under input/output constraints, with explicit information flow control and explicit statelessness. This approach aims to balance privacy and computational efficiency, enabling private inference where classical cryptographic solutions are currently infeasible. We describe a number of use cases enabled by TCMEs, and show that even some simple classic cryptographic problems can already be solved with TCMEs. Finally, we outline current limitations and discuss the path forward for implementing TCMEs.
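To make the three TCME ingredients concrete (controlled I/O, information flow control, statelessness), here is a minimal sketch of one classic problem the abstract alludes to: Yao's millionaires' problem, where two parties learn only who is richer, never each other's wealth. All names are illustrative and not from the paper, and a deterministic function stands in for the capable ML model that would act as the trusted party.

```python
# Hypothetical TCME-style interaction sketch (illustrative names, not the
# paper's API). A stand-in "model" plays the trusted third party.

from typing import Literal

Verdict = Literal["A", "B", "tie"]

# Output constraint: the environment may release only these tokens,
# nothing derived from the raw private inputs.
ALLOWED_OUTPUTS = {"A", "B", "tie"}

def trusted_model(wealth_a: int, wealth_b: int) -> Verdict:
    """Stand-in for a capable, stateless ML model.

    Statelessness: no memory persists between calls, so one party's
    private input cannot leak into any later interaction.
    """
    if wealth_a > wealth_b:
        return "A"
    if wealth_b > wealth_a:
        return "B"
    return "tie"

def tcme_session(wealth_a: int, wealth_b: int) -> Verdict:
    """Controlled I/O wrapper with explicit information flow control:
    the only information released to either party is one pre-agreed
    token; the raw inputs are never echoed back."""
    verdict = trusted_model(wealth_a, wealth_b)
    assert verdict in ALLOWED_OUTPUTS  # enforce the output constraint
    return verdict

# Each millionaire learns only who is richer, not the other's wealth.
print(tcme_session(1_000_000, 2_500_000))  # prints "B"
```

In a real TCME the comparison logic would be carried out by a capable model under these same constraints rather than by explicit code; the sketch only shows the shape of the environment around it.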