🤖 AI Summary
This work addresses the vulnerability of large language model (LLM) weights to theft, showing that their high compressibility significantly exacerbates exfiltration risk. It is the first to directly link model compressibility to weight extraction in adversarial settings, and it proposes an aggressive compression method tailored to attack scenarios: by relaxing decompression constraints, the approach achieves compression ratios of 16–100x, reducing illicit transmission time from months to days. To counter this threat, the study evaluates three complementary defenses: making models harder to compress, making them harder to locate, and a low-cost forensic watermarking mechanism that tracks provenance for post-attack analysis. Experimental results demonstrate that the proposed watermarking scheme substantially enhances traceability while preserving model utility, offering a practical pathway for protecting intellectual property in LLMs.
📝 Abstract
As frontier AIs become more powerful and costly to develop, adversaries have increasing incentives to steal model weights by mounting exfiltration attacks. In this work, we consider exfiltration attacks where an adversary attempts to sneak model weights out of a datacenter over a network. While exfiltration attacks are multi-step cyber attacks, we demonstrate that a single factor, the compressibility of model weights, significantly heightens exfiltration risk for large language models (LLMs). We tailor compression specifically for exfiltration by relaxing decompression constraints and demonstrate that attackers could achieve 16x to 100x compression with minimal trade-offs, reducing the time it would take for an attacker to illicitly transmit model weights from the defender's server from months to days. Finally, we study defenses designed to reduce exfiltration risk in three distinct ways: making models harder to compress, making them harder to 'find,' and tracking provenance for post-attack analysis using forensic watermarks. While all defenses are promising, the forensic watermark defense is both effective and cheap, and therefore is a particularly attractive lever for mitigating weight-exfiltration risk.
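The "months to days" claim is a back-of-envelope bandwidth calculation. A minimal sketch of that arithmetic follows; the checkpoint size and covert transfer rate below are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope sketch (assumed numbers, not from the paper): how the
# compression ratio changes exfiltration time at a fixed covert-transfer rate.

def exfiltration_days(model_bytes: float, compression: float,
                      rate_bytes_per_s: float) -> float:
    """Days needed to transmit the compressed weights at a given covert rate."""
    return model_bytes / compression / rate_bytes_per_s / 86_400  # s per day

# Assumptions: a 140 GB checkpoint (roughly 70B params at 2 bytes each),
# trickled out at 10 KB/s to stay under egress-monitoring thresholds.
MODEL_BYTES = 140e9
COVERT_RATE = 10e3

baseline = exfiltration_days(MODEL_BYTES, 1, COVERT_RATE)    # no compression
attacked = exfiltration_days(MODEL_BYTES, 100, COVERT_RATE)  # 100x compression

print(f"uncompressed: {baseline:.0f} days, 100x compressed: {attacked:.1f} days")
# → uncompressed: 162 days, 100x compressed: 1.6 days
```

Under these assumed parameters, a 100x compressor turns a roughly five-month transfer into under two days, which is the qualitative shift the abstract describes.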