🤖 AI Summary
Existing methods for jointly learning invariant and equivariant representations typically employ independent projection heads, neglecting information shared between the two representation types and thereby causing redundant feature learning and inefficient use of model capacity. To address this, the authors propose Soft Task-Aware Routing (STAR), a routing strategy that treats projection heads as experts and softly allocates them between shared and task-specific roles via learnable, task-conditioned gates. By inducing experts to specialize, STAR reduces redundancy between invariant and equivariant embeddings, an effect the authors verify through lower canonical correlations between the two embedding spaces. Evaluated on diverse transfer learning benchmarks, STAR yields consistent performance gains, demonstrating that mitigating feature redundancy improves representation efficiency and generalization in joint invariant-equivariant representation learning.
📝 Abstract
Equivariant representation learning aims to capture variations induced by input transformations in the representation space, whereas invariant representation learning encodes semantic information by disregarding such transformations. Recent studies have shown that jointly learning both types of representations is often beneficial for downstream tasks, typically by employing separate projection heads. However, this design overlooks information shared between invariant and equivariant learning, which leads to redundant feature learning and inefficient use of model capacity. To address this, we introduce Soft Task-Aware Routing (STAR), a routing strategy for projection heads that models them as experts. STAR induces the experts to specialize in capturing either shared or task-specific information, thereby reducing redundant feature learning. We validate this effect by observing lower canonical correlations between invariant and equivariant embeddings. Experimental results show consistent improvements across diverse transfer learning tasks. The code is available at https://github.com/YonseiML/star.
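The abstract describes projection heads modeled as experts with soft, task-aware routing between the invariant and equivariant objectives. The sketch below illustrates the general idea under stated assumptions: it is not the authors' implementation, and all names (`star_project`, `gate_logits`, the expert count, and the use of plain linear experts with fixed random weights in place of learned ones) are hypothetical, chosen only to show how per-task softmax gates can mix a shared pool of expert heads.

```python
# Hypothetical sketch of soft task-aware routing over expert projection
# heads. Weights here are random stand-ins for parameters that would be
# learned jointly with the invariant/equivariant objectives.
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

num_experts, dim, proj_dim = 3, 128, 64

# Expert projection heads: simple linear maps shared by both tasks.
experts = [rng.standard_normal((dim, proj_dim)) / np.sqrt(dim)
           for _ in range(num_experts)]

# Per-task gate logits over experts (learnable in the real model):
# row 0 = invariant task, row 1 = equivariant task. Soft gating lets
# an expert serve both tasks (shared) or mostly one (task-specific).
gate_logits = rng.standard_normal((2, num_experts))

def star_project(h, task):
    """Mix expert outputs with soft routing weights for the given task."""
    w = softmax(gate_logits[task])                       # (num_experts,)
    return sum(wi * (h @ E) for wi, E in zip(w, experts))

h = rng.standard_normal((4, dim))   # backbone features for a batch of 4
z_inv = star_project(h, task=0)     # invariant embedding, shape (4, 64)
z_eq = star_project(h, task=1)      # equivariant embedding, shape (4, 64)
print(z_inv.shape, z_eq.shape)
```

Because the gates are soft rather than hard assignments, gradients from both objectives reach every expert, which is what allows some experts to specialize in shared information while others capture task-specific features.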