🤖 AI Summary
To address the difficulty of migrating existing applications to Trusted Execution Environments (TEEs), this paper proposes AutoTEE, the first LLM-driven framework for automated TEE migration. The approach uses large language models to semantically identify sensitive functions, then partition, transform, and port them into TEEs, supporting programs in both Java and Python and multiple TEE platforms (Intel SGX and AMD SEV). Key contributions include: (1) a benchmark dataset of 385 manually annotated sensitive functions drawn from 68 repositories; (2) end-to-end support for multi-language, cross-platform migration; and (3) strong empirical results: an F1 score of 0.91 on sensitive-function identification, transformation success rates of 90% for Java and 83% for Python, and correct execution of the transformed programs on both SGX and SEV. The framework significantly lowers the barrier to TEE security integration and makes TEEs more accessible to developers.
📝 Abstract
Trusted Execution Environments (TEEs) isolate a special region of a device's memory that is inaccessible to the normal world (also known as the Untrusted Environment), even when the device is compromised. Developers can therefore use TEEs to provide strong security guarantees for their programs, protecting sensitive operations such as encrypted data storage, fingerprint verification, and remote attestation from malicious attacks. Despite the strong protections TEEs offer, adapting existing programs to leverage these guarantees is non-trivial, often requiring extensive domain knowledge and manual intervention, which makes TEEs less accessible to developers. This motivates us to design AutoTEE, the first Large Language Model (LLM)-enabled approach that can automatically identify, partition, transform, and port sensitive functions into TEEs with minimal developer intervention. By manually reviewing 68 repositories, we constructed a benchmark dataset consisting of 385 sensitive functions eligible for transformation, on which AutoTEE achieves a high F1 score of 0.91. AutoTEE effectively transforms these sensitive functions into their TEE-compatible counterparts, achieving success rates of 90% and 83% for Java and Python, respectively. We further provide a mechanism to automatically port the transformed code to different TEE platforms, including Intel SGX and AMD SEV, demonstrating that the transformed programs run successfully and correctly on these platforms.
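The identify → partition → transform → port workflow described in the abstract can be sketched in miniature. This is a hypothetical illustration, not the paper's implementation: the function names, the keyword heuristic standing in for the LLM-based semantic identification, and the platform stubs are all assumptions made for the sake of a self-contained example.

```python
# Illustrative sketch of an AutoTEE-style pipeline (hypothetical, not the
# paper's code). Stage names mirror the abstract: identify -> transform -> port.

# Naive name-based heuristic standing in for the paper's LLM classifier.
SENSITIVE_HINTS = ("encrypt", "decrypt", "sign", "fingerprint", "attest", "key")

def identify_sensitive(functions):
    """Stage 1: flag candidate sensitive functions.

    AutoTEE uses an LLM for semantic identification; a keyword match on the
    function name stands in here so the sketch stays runnable offline.
    """
    return [f for f in functions
            if any(hint in f["name"].lower() for hint in SENSITIVE_HINTS)]

def transform_for_tee(func):
    """Stages 2-3: partition and rewrite a function into a TEE-compatible form.

    A real transformation would emit enclave entry points (e.g. SGX ECALLs);
    this stub only records that the rewrite happened.
    """
    return {**func, "tee_ready": True}

def port(func, platform):
    """Stage 4: attach platform-specific scaffolding (Intel SGX or AMD SEV)."""
    assert platform in ("sgx", "sev"), "unsupported TEE platform"
    return {**func, "platform": platform}

if __name__ == "__main__":
    funcs = [{"name": "encrypt_record"},
             {"name": "render_page"},
             {"name": "verify_fingerprint"}]
    sensitive = identify_sensitive(funcs)
    ported = [port(transform_for_tee(f), "sgx") for f in sensitive]
    print([f["name"] for f in ported])  # only the security-relevant functions
```

In the actual system each stage is far richer (LLM prompting, code generation, platform SDK integration); the sketch only conveys the shape of the end-to-end pipeline with minimal developer intervention.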