Toward Robust Non-Transferable Learning: A Survey and Benchmark

📅 2025-02-19
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the ethical and security risks that arise when deep models generalize, unintentionally, to unauthorized or harmful data, the problem that non-transferable learning (NTL) is designed to prevent. We systematically expose the robustness deficiencies of existing NTL methods under adversarial attacks. First, we propose a unified formalization of the NTL task. Second, we introduce NTLBench, the first open-source evaluation benchmark for NTL, covering mainstream techniques including data isolation, gradient masking, and feature disentanglement. Third, we design a multi-dimensional robustness evaluation protocol that establishes standardized assessment criteria and a vulnerability analysis framework. Experimental results demonstrate widespread failure of generalization control across current NTL methods. This work delivers the first comprehensive survey of NTL, a reproducible evaluation infrastructure, and theoretical tools to support trustworthy AI deployment.

📝 Abstract
Over the past decades, researchers have primarily focused on improving the generalization abilities of models, with limited attention given to regulating such generalization. However, a model's ability to generalize to unintended data (e.g., harmful or unauthorized data) can be exploited by malicious adversaries in unforeseen ways, potentially resulting in violations of model ethics. Non-transferable learning (NTL), a task aimed at reshaping the generalization abilities of deep learning models, was proposed to address these challenges. While numerous methods have been proposed in this field, a comprehensive review of existing progress and a thorough analysis of current limitations remain lacking. In this paper, we bridge this gap by presenting the first comprehensive survey on NTL and introducing NTLBench, the first benchmark to evaluate NTL performance and robustness within a unified framework. Specifically, we first introduce the task settings, general framework, and criteria of NTL, followed by a summary of NTL approaches. Furthermore, we emphasize the often-overlooked issue of robustness against attacks that can destroy the non-transferable mechanism established by NTL. Experiments conducted with NTLBench confirm the robustness limitations of existing NTL methods. Finally, we discuss the practical applications of NTL, along with its future directions and associated challenges.
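The abstract's core idea, training a model to perform well on its authorized source domain while deliberately failing on unauthorized target data, is often expressed as a two-term objective. The sketch below is an illustrative simplification, not the paper's formalization: the names `ntl_loss`, `alpha`, and `beta` are ours, and regularizers used by published NTL methods (e.g., a feature-distance term) are omitted.

```python
import math

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    m = max(logits)
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[label]

def ntl_loss(src_logits, src_label, tgt_logits, tgt_label,
             alpha=0.1, beta=1.0):
    # Minimize source-domain error while pushing target-domain error up.
    # The target term is scaled by alpha and capped at beta so it cannot
    # dominate training (both hyperparameters are illustrative).
    l_src = cross_entropy(src_logits, src_label)
    l_tgt = cross_entropy(tgt_logits, tgt_label)
    return l_src - min(beta, alpha * l_tgt)
```

A model that is confident and correct on the source but wrong on the target attains a lower (better) `ntl_loss` than one that transfers to the target, which is exactly the non-transferable behavior NTL rewards.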
Problem

Research questions and friction points this paper is trying to address.

Regulating the generalization abilities of deep learning models
Hardening NTL against attacks that destroy its non-transferable mechanism
Providing the first comprehensive survey and unified benchmark for NTL
Innovation

Methods, ideas, or system contributions that make the work stand out.

Unified formalization and framework for non-transferable learning
NTLBench, a benchmark for NTL performance and robustness evaluation
Techniques for regulating deep model generalization