MoTE: Mixture of Task Experts for Multi-task Embedding Models

📅 2025-06-21
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the representation bottleneck faced by low-capacity models in instruction-conditioned embedding specialization, this paper proposes the Mixture of Task Experts (MoTE) framework, which integrates a sparse expert architecture with task-aware contrastive learning (TACL) to enable efficient, inference-cost-free multi-task embedding specialization. MoTE overcomes the expressivity limits of instruction tuning through task-specific parameterization and fine-grained task discrimination—without increasing model size, training data volume, or inference overhead. On standard retrieval benchmarks, MoTE achieves an average performance gain of +0.79 (43% relative improvement) over baselines (+1.81 → +2.60), with up to +1.94 (64% relative improvement) on individual tasks (+3.27 → +5.21). These results substantially enhance the practicality of small models in retrieval-augmented generation (RAG) and representation learning.
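The summary describes task-specific parameterization routed without extra inference cost. A minimal sketch of what such a mixture-of-task-experts transformer sub-layer could look like, assuming one feed-forward expert per task selected by a discrete task id (the paper's exact block design may differ):

```python
import torch
import torch.nn as nn

class MoTEBlock(nn.Module):
    """Illustrative mixture-of-task-experts feed-forward sub-layer.

    One FFN expert per task; a task id routes the whole batch to its
    expert, so the number of *active* parameters per forward pass
    matches a single dense FFN (no added inference overhead).
    """
    def __init__(self, d_model: int, d_ff: int, num_tasks: int):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, d_ff),
                nn.GELU(),
                nn.Linear(d_ff, d_model),
            )
            for _ in range(num_tasks)
        )
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Route to the task-specific expert; residual connection and
        # normalization as in a standard transformer FFN sub-layer.
        return self.norm(x + self.experts[task_id](x))

block = MoTEBlock(d_model=64, d_ff=256, num_tasks=3)
out = block(torch.randn(2, 10, 64), task_id=1)
```

Because routing is by task id rather than a learned token-level gate, the selection adds no routing computation at inference time.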

📝 Abstract
Dense embeddings are fundamental to modern machine learning systems, powering Retrieval-Augmented Generation (RAG), information retrieval, and representation learning. While instruction-conditioning has become the dominant approach for embedding specialization, its direct application to low-capacity models imposes fundamental representational constraints that limit the performance gains derived from specialization. In this paper, we analyze these limitations and introduce the Mixture of Task Experts (MoTE) transformer block, which leverages task-specialized parameters trained with Task-Aware Contrastive Learning (TACL) to enhance the model's ability to generate specialized embeddings. Empirical results show that MoTE achieves $64\%$ higher performance gains on retrieval datasets ($+3.27 \rightarrow +5.21$) and $43\%$ higher performance gains across all datasets ($+1.81 \rightarrow +2.60$). Critically, these gains are achieved without altering instructions, training data, inference time, or the number of active parameters.
Problem

Research questions and friction points this paper is trying to address.

Enhancing embedding specialization in low-capacity models
Overcoming limitations of instruction-conditioning approach
Improving performance gains without altering resources
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture of Task Experts (MoTE) transformer block
Task-Aware Contrastive Learning (TACL)
Enhances specialized embeddings without altering instructions
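The Task-Aware Contrastive Learning objective listed above is not specified in detail here; a plausible sketch is an InfoNCE-style loss whose in-batch negatives are restricted to examples from the same task, encouraging fine-grained, task-specific discrimination. The paper's exact TACL formulation may differ, so treat the masking scheme below as an assumption:

```python
import torch
import torch.nn.functional as F

def task_aware_info_nce(queries, docs, task_ids, temperature=0.05):
    """Illustrative task-aware InfoNCE loss.

    queries, docs: (B, D) paired embeddings; docs[i] is the positive
    for queries[i]. task_ids: (B,) integer task labels. Cross-task
    pairs are masked out, so negatives come only from the same task.
    """
    q = F.normalize(queries, dim=-1)
    d = F.normalize(docs, dim=-1)
    sim = q @ d.T / temperature  # (B, B) cosine similarities
    # Mask pairs drawn from different tasks (assumed negative-sampling rule).
    same_task = task_ids.unsqueeze(0) == task_ids.unsqueeze(1)
    sim = sim.masked_fill(~same_task, float("-inf"))
    labels = torch.arange(q.size(0))  # positives sit on the diagonal
    return F.cross_entropy(sim, labels)
```

The diagonal (each query with its own positive) always survives the mask, so the loss stays finite regardless of the task mix in the batch.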