Rep-MTL: Unleashing the Power of Representation-level Task Saliency for Multi-Task Learning

📅 2025-07-28
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multi-task learning (MTL) optimization methods primarily operate at the gradient level—via loss scaling and conflict mitigation—yet yield inconsistent performance improvements and overlook the latent task interactions within the shared representation space. Method: We introduce “representation-level task saliency,” a novel concept that explicitly models and regulates task dependencies in shared representations. Our approach employs entropy regularization to suppress negative transfer and sample-wise cross-task alignment to foster complementary knowledge sharing—all within a standard training framework and without complex loss weighting. Contribution/Results: The method achieves competitive performance on four mainstream MTL benchmarks, maintaining efficiency even under uniform loss weighting. Power-law analysis confirms its balanced trade-off between task specialization and representation sharing. By shifting optimization from gradient manipulation to representation-level control, our work breaks from conventional gradient-centric paradigms and establishes a more principled foundation for MTL.
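To make the two regulators in the summary concrete, here is a minimal, self-contained sketch of the idea: treat each task's per-feature saliency over the shared representation as a distribution, penalize low entropy (so no single task dominates the shared features), and measure sample-wise cross-task agreement with cosine similarity. The task names (`seg`, `depth`), the saliency values, and the exact penalty form are illustrative assumptions, not the paper's actual formulation.

```python
import math

def entropy(p):
    # Shannon entropy of a probability distribution (natural log)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def cosine(u, v):
    # Cosine similarity between two saliency vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def saliency_entropy_penalty(s):
    # Normalize saliency scores into a distribution; minimizing the
    # negative entropy spreads saliency across shared features,
    # discouraging one task from monopolizing them.
    total = sum(s)
    p = [x / total for x in s]
    return -entropy(p)

# Hypothetical per-task saliency over three shared features for one sample
saliency = {"seg": [0.9, 0.1, 0.4], "depth": [0.8, 0.2, 0.5]}

penalty = saliency_entropy_penalty(saliency["seg"])
# Sample-wise cross-task alignment: reward agreement between tasks'
# saliency patterns to encourage complementary sharing.
align = cosine(saliency["seg"], saliency["depth"])
```

In a real training loop the saliencies would come from gradients of each task loss with respect to the shared representation, and both terms would be added to the training objective; this sketch only shows the shape of the computation.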

📝 Abstract
Despite the promise of Multi-Task Learning in leveraging complementary knowledge across tasks, existing multi-task optimization (MTO) techniques remain fixated on resolving conflicts via optimizer-centric loss scaling and gradient manipulation strategies, yet fail to deliver consistent gains. In this paper, we argue that the shared representation space, where task interactions naturally occur, offers rich information and potential for operations complementary to existing optimizers, especially for facilitating inter-task complementarity, which is rarely explored in MTO. This intuition leads to Rep-MTL, which exploits representation-level task saliency to quantify interactions between task-specific optimization and shared representation learning. By steering these saliencies through entropy-based penalization and sample-wise cross-task alignment, Rep-MTL aims to mitigate negative transfer by maintaining effective training of individual tasks instead of pure conflict-solving, while explicitly promoting complementary information sharing. Experiments are conducted on four challenging MTL benchmarks covering both task-shift and domain-shift scenarios. The results show that Rep-MTL, even paired with the basic equal weighting policy, achieves competitive performance gains with favorable efficiency. Beyond standard performance metrics, Power Law exponent analysis demonstrates Rep-MTL's efficacy in balancing task-specific learning and cross-task sharing. The project page is available at HERE.
Problem

Research questions and friction points this paper is trying to address.

Resolve conflicts in multi-task learning via representation-level saliency
Enhance inter-task complementarity beyond optimizer-centric strategies
Mitigate negative transfer while promoting effective task-specific training
Innovation

Methods, ideas, or system contributions that make the work stand out.

Exploits representation-level task saliency
Uses entropy-based penalization to preserve effective task-specific training
Applies sample-wise cross-task alignment for complementary knowledge sharing
Zedong Wang
The Hong Kong University of Science and Technology (HKUST)
Deep Learning · Computer Vision · Multi-task Learning
Siyuan Li
Zhejiang University
Dan Xu
The Hong Kong University of Science and Technology