MTL-UE: Learning to Learn Nothing for Multi-Task Learning

📅 2025-05-08
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
Existing unlearnable example (UE) methods target single-task learning (STL) and fail to address emerging threats in multi-task learning (MTL), where adversaries exploit multi-task data to train universal or foundation models. This paper introduces MTL-UE, the first generative UE framework explicitly designed for MTL. Its core innovations include: (i) incorporating label priors and class-level feature embeddings; (ii) designing inter-task and intra-task embedding regularization to enhance robustness against cross-task adversarial perturbations; and (iii) natively supporting dense prediction tasks with plug-and-play compatibility. Extensive experiments across four MTL benchmarks, three UE baselines, five model backbones, and five task-weighting strategies demonstrate consistent and significant improvements over prior approaches, validating MTL-UE's effectiveness and generalizability.

πŸ“ Abstract
Most existing unlearnable strategies focus on preventing unauthorized users from training single-task learning (STL) models with personal data. Nevertheless, the paradigm has recently shifted towards multi-task data and multi-task learning (MTL), targeting generalist and foundation models that can handle multiple tasks simultaneously. Despite their growing importance, MTL data and models have been largely neglected while pursuing unlearnable strategies. This paper presents MTL-UE, the first unified framework for generating unlearnable examples for multi-task data and MTL models. Instead of optimizing perturbations for each sample, we design a generator-based structure that introduces label priors and class-wise feature embeddings, which leads to much better attack performance. In addition, MTL-UE incorporates intra-task and inter-task embedding regularization to increase inter-class separation and suppress intra-class variance, which greatly enhances attack robustness. Furthermore, MTL-UE is versatile, with good support for dense prediction tasks in MTL. It is also plug-and-play, allowing existing surrogate-dependent unlearnable methods to be integrated with little adaptation. Extensive experiments show that MTL-UE achieves superior attacking performance consistently across 4 MTL datasets, 3 base UE methods, 5 model backbones, and 5 MTL task-weighting strategies.
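The generator-based design described in the abstract can be pictured with a toy sketch. Everything below is illustrative, not the paper's implementation: a label-conditioned generator holds one perturbation per class and projects it onto a small L-infinity budget before adding it to the image, so the poisoned image stays visually close to the original.

```python
import numpy as np

rng = np.random.default_rng(0)

class LabelConditionedGenerator:
    """Toy stand-in for a learned generator with label priors.

    Instead of a trained network, it keeps one (randomly initialized)
    perturbation per class; the real method would learn these end-to-end.
    """

    def __init__(self, num_classes, shape, eps=8 / 255):
        self.eps = eps  # L_inf perturbation budget
        # one perturbation "seed" per class (random init, hypothetical)
        self.class_noise = rng.standard_normal((num_classes, *shape))

    def __call__(self, images, labels):
        # look up each sample's class-wise perturbation via its label prior
        delta = self.class_noise[labels]
        # project onto the L_inf budget so the change stays imperceptible
        delta = np.clip(delta, -self.eps, self.eps)
        # keep the poisoned images in the valid pixel range [0, 1]
        return np.clip(images + delta, 0.0, 1.0)
```

Because the perturbation depends only on the label, every sample of a class receives the same class-wise signal, which is the shortcut an MTL model is meant to latch onto instead of the real features.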
Problem

Research questions and friction points this paper is trying to address.

Generating unlearnable examples for multi-task learning models
Enhancing attack robustness via intra-task and inter-task regularization
Supporting dense prediction tasks in multi-task learning frameworks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Generator-based structure with label priors
Intra-task and inter-task embedding regularization
Plug-and-play support for existing UE methods
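The embedding regularization listed above targets two quantities: intra-class variance (to be suppressed) and inter-class separation (to be enlarged). A minimal single-task sketch of those two quantities, in my own illustrative formulation rather than the paper's actual loss, could look like:

```python
import numpy as np

def embedding_regularizers(feats, labels):
    """Toy proxies for the two regularization targets.

    feats:  (N, d) array of feature embeddings.
    labels: (N,) array of integer class labels.
    Returns (intra, inter): intra-class variance and inter-class separation.
    """
    classes = np.unique(labels)
    # per-class mean embedding (class centroid)
    means = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    # intra-class variance: mean squared distance of samples to their centroid
    intra = np.mean([
        np.mean(np.sum((feats[labels == c] - means[i]) ** 2, axis=1))
        for i, c in enumerate(classes)
    ])
    # inter-class separation: mean pairwise distance between class centroids
    diffs = means[:, None, :] - means[None, :, :]
    dists = np.sqrt(np.sum(diffs ** 2, axis=-1))
    k = len(classes)
    inter = dists.sum() / (k * (k - 1))
    return float(intra), float(inter)
```

A training objective would then push `intra` down and `inter` up (e.g. by adding a term like `intra - inter` to the loss); per the summary, MTL-UE applies such regularization both within each task and across tasks.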
👥 Authors
Yi Yu
Rapid-Rich Object Search Lab, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
Song Xia
Nanyang Technological University
Machine Learning
Siyuan Yang
Wallenberg-NTU Presidential Postdoctoral Fellowship, Nanyang Technological University
Computer Vision, Action Recognition
Chenqi Kong
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
Wenhan Yang
Ph.D. student of Computer Science, University of California, Los Angeles
Self-supervised Learning, Model Robustness
Shijian Lu
College of Computing and Data Science, NTU
Image and video analytics, computer vision, machine learning
Yap-Peng Tan
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore
A.C. Kot
School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore