🤖 AI Summary
This paper addresses the challenge of simultaneously mitigating representation interference and ensuring parameter efficiency when fine-tuning Vision Transformers for multiple tasks. The authors propose a neural diffeomorphic transformation method based on singular value decomposition (SVD): the left and right singular vectors of the pretrained weights are frozen, while only the singular values are modulated through a learnable diffeomorphic function. To the authors' knowledge, this is the first work to introduce diffeomorphisms into multi-task adaptation; it enables full-rank updates with theoretical guarantees that strictly preserve the geometric structure of pretrained features, overcoming the task-competition bottleneck imposed by low-rank constraints. The resulting parameter-efficient multi-task framework achieves state-of-the-art performance on four dense prediction tasks from PASCAL MTL and NYUD while using 75% fewer parameters than existing methods.
📝 Abstract
Pre-trained Vision Transformers now serve as powerful tools for computer vision. Yet efficiently adapting them to multiple tasks remains challenging: the rich hidden representations encoded in the learned weight matrices must be modified without inducing interference between tasks. Current parameter-efficient methods like LoRA, which apply low-rank updates, force tasks to compete within constrained subspaces, ultimately degrading performance. We introduce DiTASK, a novel Diffeomorphic Multi-Task Fine-Tuning approach that maintains pre-trained representations by preserving the singular vectors of the weight matrices, while enabling task-specific adaptation through neural diffeomorphic transformations of the singular values. In this way, DiTASK enables both shared and task-specific feature modulation with minimal added parameters. Our theoretical analysis shows that DiTASK achieves full-rank updates during optimization and preserves the geometric structure of pre-trained features, establishing a new paradigm for efficient multi-task learning (MTL). Our experiments on PASCAL MTL and NYUD show that DiTASK achieves state-of-the-art performance across four dense prediction tasks while using 75% fewer parameters than existing methods.
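To make the mechanism concrete, below is a minimal PyTorch sketch for a single linear layer, under stated assumptions: the class name `DiTASKLinearSketch` and the positive-weight monotone MLP used to parameterize the diffeomorphism are illustrative choices, not the paper's exact construction; any smooth, strictly monotone map of the singular values would play the same role.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DiTASKLinearSketch(nn.Module):
    """Illustrative sketch of the DiTASK idea for one linear layer: freeze
    the singular vectors of the pre-trained weight and learn a strictly
    monotone (hence invertible, i.e. diffeomorphic) map of its singular
    values. The monotone-MLP parameterization is a simplification."""

    def __init__(self, pretrained_weight: torch.Tensor, hidden: int = 16):
        super().__init__()
        # One-time SVD of the frozen pre-trained weight: W = U diag(s) V^T.
        U, s, Vh = torch.linalg.svd(pretrained_weight, full_matrices=False)
        self.register_buffer("U", U)    # frozen left singular vectors
        self.register_buffer("s", s)    # frozen singular values
        self.register_buffer("Vh", Vh)  # frozen right singular vectors
        # Parameters of a scalar network applied to each singular value.
        # Exponentiated weights are positive and tanh is strictly increasing,
        # so the learned map s -> f(s) has everywhere-positive derivative.
        self.w1 = nn.Parameter(0.01 * torch.randn(hidden))
        self.b1 = nn.Parameter(torch.zeros(hidden))
        self.w2 = nn.Parameter(0.01 * torch.randn(hidden))

    def transformed_singular_values(self) -> torch.Tensor:
        x = self.s.unsqueeze(-1)                      # shape (r, 1)
        h = torch.tanh(x * self.w1.exp() + self.b1)   # shape (r, hidden)
        delta = (h * self.w2.exp()).sum(-1)           # shape (r,)
        # softplus keeps the transformed singular values positive.
        return F.softplus(self.s + delta)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reassemble the weight with modulated singular values; because every
        # singular direction can be rescaled, the update is full-rank, unlike
        # a rank-limited LoRA update.
        W = self.U @ torch.diag(self.transformed_singular_values()) @ self.Vh
        return x @ W.t()


# Example: wrap a (hypothetical) 768x768 projection from a ViT block.
layer = DiTASKLinearSketch(torch.randn(768, 768))
out = layer(torch.randn(4, 768))
```

Only the small scalar network is trainable here, while the singular vectors stay frozen, which mirrors the abstract's claim of full-rank adaptation with minimal added parameters.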