🤖 AI Summary
To address the risk of model parameter leakage and the security-efficiency trade-off in deploying large language models (LLMs) on mobile devices, this paper proposes a lightweight secure inference framework built on Arm TrustZone. Methodologically, it introduces a TEE-REE co-driven architecture that enables time-division sharing of the NPU while minimizing the trusted computing base; designs a pipelined parameter-prefetching mechanism that integrates deterministic memory-access prediction, on-demand decryption, and encrypted memory management; and develops a lightweight NPU data-plane driver. Evaluated on a Rockchip platform, the framework reduces first-token latency by up to 90.9% and improves decoding throughput by up to 23.2%, balancing robust model intellectual-property protection with high-performance local inference.
📝 Abstract
Large Language Models (LLMs) deployed on mobile devices offer benefits such as user privacy and reduced network latency, but they introduce a significant security risk: the leakage of proprietary models to end users.
To mitigate this risk, we propose a system design for protecting on-device LLMs using Arm's TrustZone Trusted Execution Environment (TEE). Our system addresses two primary challenges: (1) the dilemma between memory efficiency and fast inference when caching model parameters within limited TEE memory, and (2) the lack of efficient and secure Neural Processing Unit (NPU) time-sharing between the Rich Execution Environment (REE) and the TEE.
Our approach incorporates two key innovations. First, we employ pipelined restoration, which leverages the deterministic memory-access patterns of LLM inference to prefetch parameters on demand, hiding memory allocation, I/O, and decryption latency behind computation time. Second, we introduce a co-driver design: a minimal data-plane NPU driver in the TEE that collaborates with the full-fledged REE driver. This reduces the TEE's trusted computing base (TCB) and eliminates control-plane reinitialization overhead during NPU world switches.
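To make the pipelined-restoration idea concrete, here is a minimal sketch, not the paper's implementation: the helper names (`fetch_encrypted_layer`, `decrypt_layer`, `npu_compute_layer`) and buffer sizes are hypothetical stand-ins for the real I/O, decryption, and NPU-submission calls. A restore thread fetches and decrypts layer i+1 into one half of a double buffer while the NPU computes layer i, so I/O and decryption latency stay hidden behind computation whenever restoration keeps pace.

```c
/*
 * Minimal sketch of pipelined parameter restoration (illustrative only).
 * fetch_encrypted_layer(), decrypt_layer(), and npu_compute_layer() are
 * hypothetical stand-ins for the real I/O, crypto, and NPU calls.
 */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N_LAYERS   32
#define LAYER_SIZE (4 * 1024 * 1024)       /* illustrative layer size */

static unsigned char buf[2][LAYER_SIZE];   /* double buffer in TEE memory */
static sem_t slot_free[2];                 /* slot may be overwritten */
static sem_t slot_ready[2];                /* slot holds a decrypted layer */

/* Hypothetical stand-ins for the real I/O, decryption, and NPU calls. */
static void fetch_encrypted_layer(int layer, unsigned char *dst) { (void)layer; (void)dst; }
static void decrypt_layer(unsigned char *data) { (void)data; }
static void npu_compute_layer(int layer, const unsigned char *weights) {
    printf("compute layer %d\n", layer); (void)weights;
}

static void *restore_thread(void *arg) {
    (void)arg;
    for (int i = 0; i < N_LAYERS; i++) {
        int slot = i & 1;
        sem_wait(&slot_free[slot]);          /* wait until slot is reusable */
        fetch_encrypted_layer(i, buf[slot]); /* read ciphertext from REE storage */
        decrypt_layer(buf[slot]);            /* decrypt in place inside the TEE */
        sem_post(&slot_ready[slot]);         /* hand the layer to the compute loop */
    }
    return NULL;
}

int main(void) {
    pthread_t t;
    for (int s = 0; s < 2; s++) {
        sem_init(&slot_free[s], 0, 1);       /* both slots start empty */
        sem_init(&slot_ready[s], 0, 0);
    }
    pthread_create(&t, NULL, restore_thread, NULL);

    for (int i = 0; i < N_LAYERS; i++) {
        int slot = i & 1;
        sem_wait(&slot_ready[slot]);         /* blocks only if restore fell behind */
        npu_compute_layer(i, buf[slot]);     /* NPU runs while the next layer restores */
        sem_post(&slot_free[slot]);
    }
    pthread_join(t, NULL);
    return 0;
}
```

Because LLM inference touches layers in a fixed order, the restore thread's access pattern is fully deterministic, which is what makes a simple double-buffered pipeline like this sufficient.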
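The co-driver split can be illustrated as follows. This is a hedged sketch, assuming made-up register offsets and function names (`ree_npu_probe_and_power_on`, `ree_npu_yield_to_tee`, `tee_npu_submit`) rather than the actual Rockchip NPU interface. The point is the division of labor: the untrusted REE driver keeps all control-plane logic (probe, power, clocks, firmware, IOMMU setup), while the TEE holds only the few lines needed to submit a pre-built job and poll for completion, so a world switch requires no driver reinitialization and the trusted code stays tiny.

```c
/*
 * Sketch of the TEE/REE co-driver split (illustrative, not the real driver).
 * Register offsets and all function names here are hypothetical.
 */
#include <stdint.h>

/* Hypothetical MMIO offsets of an already-configured NPU. */
#define NPU_REG_JOB_ADDR  0x00
#define NPU_REG_JOB_GO    0x04
#define NPU_REG_STATUS    0x08
#define NPU_STATUS_DONE   0x1

/* Mapped secure by the TEE while it owns the NPU. */
static volatile uint32_t *npu_mmio;

/* --- REE control plane (full-fledged driver, untrusted) --------------- */
void ree_npu_probe_and_power_on(void);   /* clocks, power, firmware, IOMMU */
void ree_npu_yield_to_tee(void);         /* mark the NPU secure, enter TEE */

/* --- TEE data plane (minimal trusted driver) --------------------------- */
/* Submit one pre-built command buffer; no init or config logic in the TCB. */
int tee_npu_submit(uint32_t job_addr) {
    npu_mmio[NPU_REG_JOB_ADDR / 4] = job_addr;
    npu_mmio[NPU_REG_JOB_GO / 4]   = 1;
    while (!(npu_mmio[NPU_REG_STATUS / 4] & NPU_STATUS_DONE))
        ;                                /* poll; a real driver would use an IRQ */
    return 0;
}
```

Since the NPU remains configured across world switches, the TEE-side driver never repeats the expensive control-plane setup, which is the source of the reinitialization overhead the co-driver design eliminates.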
We implemented our system on the emerging OpenHarmony OS and the llama.cpp inference framework, and evaluated it with various LLMs on an Arm Rockchip device. Compared to a strawman TEE baseline lacking our optimizations, our system reduces time-to-first-token (TTFT) by up to 90.9% and increases decoding speed by up to 23.2%.