Re-Key-Free, Risk-Free: Adaptable Model Usage Control

📅 2025-11-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the failure of intellectual property (IP) protection for deep neural networks (DNNs) under continual fine-tuning and task adaptation, this paper proposes ADALOC—the first model usage control framework supporting keyless re-encryption and adaptive authorization. Its core innovation lies in dynamically designating a subset of model weights as intrinsic access keys, enforcing all parameter updates exclusively on this subset while theoretically bounding the induced functional degradation to preserve model integrity. Authorized users retain seamless, uninterrupted access without requiring key redistribution. Evaluated on benchmarks including CIFAR-100, ADALOC maintains high authorized accuracy (>85%), while unauthorized access degrades performance to near-random levels (1.01%), significantly outperforming existing static-key-based approaches.

📝 Abstract
Deep neural networks (DNNs) have become valuable intellectual property of model owners due to the substantial resources required for their development. To protect these assets in deployed environments, recent research has proposed model usage control mechanisms that ensure models cannot be used without proper authorization. These methods typically lock the utility of the model by embedding an access key into its parameters. However, they often assume static deployment and largely fail to withstand continual post-deployment model updates, such as fine-tuning or task-specific adaptation. In this paper, we propose ADALOC to endow key-based model usage control with adaptability during model evolution. It strategically selects a subset of weights as an intrinsic access key, and confines all model updates to this key throughout the evolution lifecycle. ADALOC enables using the access key to restore the keyed model to the latest authorized state without redistributing the entire network (i.e., adaptation), and frees the model owner from full re-keying after each model update (i.e., lock preservation). We establish a formal foundation to underpin ADALOC, providing crucial bounds such as the error introduced by updates restricted to the access key. Experiments on standard benchmarks, such as CIFAR-100, Caltech-256, and Flowers-102, and modern architectures, including ResNet, DenseNet, and ConvNeXt, demonstrate that ADALOC achieves high accuracy under significant updates while retaining robust protection. Specifically, authorized usage consistently achieves strong task-specific performance, while unauthorized usage accuracy drops to near-random guessing levels (e.g., 1.01% on CIFAR-100), compared to up to 87.01% without ADALOC. This shows that ADALOC offers a practical solution for adaptive and protected DNN deployment in evolving real-world scenarios.
Problem

Research questions and friction points this paper is trying to address.

Protecting DNN intellectual property against unauthorized usage
Enabling model usage control during continual post-deployment updates
Maintaining protection without full model redistribution after updates
Innovation

Methods, ideas, or system contributions that make the work stand out.

Uses intrinsic access key in model weights
Confines all updates to the key subset
Enables adaptation without full model redistribution
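The mechanism sketched in the bullets above can be illustrated with a toy example. This is a minimal sketch under stated assumptions, not the paper's implementation: the model is reduced to a single weight vector, the key selection is random (the paper selects it strategically, with theoretical error bounds), and `masked_update` is a hypothetical helper showing how every fine-tuning step can be confined to the key subset so the non-key weights never change.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": one flat weight vector standing in for a DNN's parameters.
n_weights = 100
weights = rng.normal(size=n_weights)

# Hypothetical key selection: designate 10% of the indices as the
# intrinsic access key (ADALOC chooses this subset strategically).
key_idx = rng.choice(n_weights, size=10, replace=False)
key_mask = np.zeros(n_weights, dtype=bool)
key_mask[key_idx] = True

def masked_update(w, grad, lr=0.1):
    """Apply a gradient step on the key subset only (lock preservation)."""
    step = np.where(key_mask, lr * grad, 0.0)
    return w - step

grad = rng.normal(size=n_weights)
updated = masked_update(weights, grad)

# Non-key weights are untouched, so the locked base model never changes;
# after an update, redistributing only the key restores authorized users
# to the latest state (adaptation without full model redistribution).
assert np.allclose(updated[~key_mask], weights[~key_mask])
print(int(np.count_nonzero(updated != weights)))  # → 10
```

The design point this illustrates: because the update is zero outside the key mask, the deployed (locked) network is invariant across fine-tuning rounds, and only the small key subset ever needs to be re-issued to authorized users.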
Zihan Wang
The University of Queensland, Australia
Zhongkui Ma
PhD student at the University of Queensland
Neural Network Verification · Reachability Analysis · Trustworthy AI · AI Safety
Xinguo Feng
The University of Queensland
Chuan Yan
The University of Queensland, Australia
Dongge Liu
Google
Cybersecurity · Machine Learning
Ruoxi Sun
CSIRO’s Data61, Australia
Derui Wang
CSIRO’s Data61, Australia
Minhui Xue
CSIRO’s Data61, Australia
Guangdong Bai
Associate Professor of The University of Queensland
System Security · Software Security · Trustworthy AI · Privacy Compliance