🤖 AI Summary
Existing remote attestation methods for billion-parameter on-device large language models (LLMs) fail to simultaneously satisfy timeliness, memory efficiency, and scalability to large-scale models.
Method: This paper proposes the first efficient remote attestation framework tailored for on-device LLMs, achieved via algorithm-software-hardware co-design. It embeds a robust watermark into the model’s activation distribution, integrates a lightweight verification protocol inside a trusted execution environment (TEE), and binds cryptographic signatures to fine-grained model components.
Contribution/Results: Evaluated on mainstream architectures—including Llama, Qwen, and Phi—the framework precisely detects model replacement and forgery attacks with high attestation accuracy, incurs zero inference throughput overhead, and imposes minimal computational and memory cost. It thus provides strong, hardware-rooted intellectual property protection for model vendors, establishing a practical foundation for trustworthy on-device LLM deployment.
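The paper does not publish its watermarking algorithm in this summary, but the core idea of embedding a key-derived signature into a building block's activation distribution and later verifying it can be sketched as a toy example. Everything below (the ±1 pattern, the additive shift, the projection score, and the threshold values) is an illustrative assumption, not AttestLLM's actual scheme:

```python
import hashlib
import numpy as np

def key_pattern(key: str, dim: int) -> np.ndarray:
    """Derive a deterministic +/-1 watermark pattern from a secret key (assumed design)."""
    seed = int.from_bytes(hashlib.sha256(key.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=dim)

def embed_watermark(acts: np.ndarray, key: str, strength: float = 0.1) -> np.ndarray:
    """Shift the block's activation distribution along the key pattern."""
    return acts + strength * key_pattern(key, acts.shape[-1])

def verify(acts: np.ndarray, key: str, threshold: float = 0.05) -> bool:
    """Detect the watermark by projecting the mean activation onto the pattern."""
    pattern = key_pattern(key, acts.shape[-1])
    score = float(acts.mean(axis=0) @ pattern) / acts.shape[-1]
    return score > threshold

rng = np.random.default_rng(0)
acts = rng.normal(size=(1024, 256))        # stand-in for one block's activations
marked = embed_watermark(acts, key="vendor-secret")
print(verify(marked, "vendor-secret"))     # True  -- watermarked (authorized) model
print(verify(acts, "vendor-secret"))       # False -- unmarked (replaced/forged) model
```

In this simplified setting the verifier only needs the secret key and a batch of activations, which hints at why such checks can run cheaply alongside inference; the real framework additionally hardens verification inside the TEE.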
📝 Abstract
As on-device large language models (LLMs) (e.g., Apple on-device Intelligence) are widely adopted to reduce network dependency, improve privacy, and enhance responsiveness, verifying the legitimacy of models running on local devices becomes critical. Existing attestation techniques are not suitable for billion-parameter LLMs, struggling to remain both time- and memory-efficient while addressing emerging threats in the LLM era. In this paper, we present AttestLLM, the first-of-its-kind attestation framework to protect the hardware-level intellectual property (IP) of device vendors by ensuring that only authorized LLMs can execute on target platforms. AttestLLM leverages an algorithm/software/hardware co-design approach to embed robust watermarking signatures onto the activation distributions of LLM building blocks. It also optimizes the attestation protocol within the Trusted Execution Environment (TEE), providing efficient verification without compromising inference throughput. Extensive proof-of-concept evaluations on LLMs from the Llama, Qwen, and Phi families for on-device use cases demonstrate AttestLLM's attestation reliability, fidelity, and efficiency. Furthermore, AttestLLM enforces model legitimacy and exhibits resilience against model replacement and forgery attacks.
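To make the "binding cryptographic signatures to fine-grained model components" step concrete, here is a minimal sketch of component-level attestation: each block's weights are hashed, the vendor binds a keyed MAC to each digest (a stand-in for a real signature scheme), and the verifier, which the paper places inside the TEE, recomputes and checks every binding. The key, component names, and use of HMAC are illustrative assumptions, not AttestLLM's actual protocol:

```python
import hashlib
import hmac

def component_digest(weights: bytes) -> bytes:
    """Fine-grained digest of one model component (e.g., a transformer block)."""
    return hashlib.sha256(weights).digest()

def sign_components(components: dict, vendor_key: bytes) -> dict:
    """Vendor side: bind a MAC (stand-in for a signature) to each component digest."""
    return {name: hmac.new(vendor_key, component_digest(w), "sha256").digest()
            for name, w in components.items()}

def attest(components: dict, signatures: dict, vendor_key: bytes) -> bool:
    """Verifier side (run inside the TEE): recompute digests and check every binding."""
    return all(
        hmac.compare_digest(
            hmac.new(vendor_key, component_digest(w), "sha256").digest(),
            signatures.get(name, b""))
        for name, w in components.items())

vendor_key = b"vendor-root-key"                       # hypothetical key material
model = {"block0": b"\x01" * 64, "block1": b"\x02" * 64}
sigs = sign_components(model, vendor_key)
print(attest(model, sigs, vendor_key))   # True  -- authorized model
model["block1"] = b"\x03" * 64           # simulated model replacement attack
print(attest(model, sigs, vendor_key))   # False -- tampered component detected
```

Binding signatures at component granularity rather than over the whole multi-gigabyte checkpoint is what allows verification to stay memory-efficient: the TEE can stream and check one block's digest at a time.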