🤖 AI Summary
Pre-trained models face extraction attacks in which adversaries exploit domain-invariant features to infer the source domain and enable unauthorized cross-domain transfer. To mitigate this, we propose a targeted weight-space manipulation method that specifically safeguards domain-invariant features. Our approach introduces a novel bi-level optimization framework: the upper level quantifies filter-level transferability to identify sensitive parameters, while the lower level jointly leverages perturbation of weights in unsecured memory and Trusted Execution Environment (TEE)-enforced secure execution to protect the model. The method preserves performance for authorized downstream tasks while reducing source-domain identification accuracy to near-random levels and degrading cross-domain transfer capability by 74.65%. This is the first work to synergistically integrate transferability quantification, memory perturbation, and TEE-based protection for copyright enforcement in pre-trained models, significantly enhancing robustness against model extraction.
📝 Abstract
Pre-trained models are valuable intellectual property, capturing both domain-specific and domain-invariant features within their weight spaces. However, model extraction attacks threaten these assets by enabling unauthorized source-domain inference and cross-domain transfer through the exploitation of domain-invariant features. In this work, we introduce **ProDiF**, a novel framework that leverages targeted weight-space manipulation to secure pre-trained models against extraction attacks. **ProDiF** quantifies the transferability of filters and perturbs the weights of critical filters in unsecured memory, while preserving the original critical weights in a Trusted Execution Environment (TEE) for authorized users. A bi-level optimization formulation further ensures resilience against adaptive fine-tuning attacks. Experimental results show that **ProDiF** reduces source-domain accuracy to near-random levels and decreases cross-domain transferability by 74.65%, providing robust protection for pre-trained models. This work offers comprehensive protection for pre-trained DNN models and highlights the potential of weight-space manipulation as a novel approach to model security.
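The protection mechanism described above can be sketched in a few lines. This is a minimal illustration, not ProDiF's actual algorithm: the paper derives per-filter transferability scores via bi-level optimization, whereas here the scores are simply taken as given, and a plain Python dict (`secure_store`) stands in for TEE-resident storage. The function names `protect_filters` and `restore` are hypothetical.

```python
import numpy as np

def protect_filters(conv_weights, transferability, k, noise_scale=0.5, seed=0):
    """Perturb the k most transferable filters in the copy that lives in
    unsecured memory, while keeping their true values in a separate
    'secure' store (a stand-in for the TEE)."""
    rng = np.random.default_rng(seed)
    top_k = np.argsort(transferability)[-k:]  # indices of most transferable filters
    # Originals of the critical filters, held only inside the TEE stand-in.
    secure_store = {int(i): conv_weights[i].copy() for i in top_k}
    perturbed = conv_weights.copy()
    for i in top_k:
        perturbed[i] += noise_scale * rng.standard_normal(conv_weights[i].shape)
    return perturbed, secure_store, top_k

def restore(perturbed, secure_store):
    """Authorized inference: the TEE swaps the true weights back in."""
    restored = perturbed.copy()
    for i, w in secure_store.items():
        restored[i] = w
    return restored
```

An attacker extracting the model from unsecured memory obtains only the perturbed critical filters, while an authorized user running inside the TEE sees the original weights; the perturbation magnitude (`noise_scale`) controls how strongly source-domain inference and cross-domain transfer are degraded.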