Generative Representational Instruction Tuning

📅 2024-02-15
🏛️ arXiv.org
📈 Citations: 78
Influential: 20
📄 PDF
🤖 AI Summary
Current large language models exhibit a trade-off between text generation and embedding tasks, struggling to excel at both simultaneously. Method: This paper introduces Generative Representational Instruction Tuning (GRIT), a paradigm that unifies generative and embedding modeling within a single model. GRIT distinguishes the two task types via instructions and trains on both objectives jointly, so that neither capability is sacrificed. Contribution/Results: The resulting GritLM 7B sets a new open-model state of the art on MTEB while outperforming comparably sized open models on generative tasks, and the larger GritLM 8x7B outperforms all open generative models the authors tried while remaining among the best embedding models. Moreover, in RAG over long documents, the unified model achieves a speedup of over 60% by removing the need for separate retrieval and generation models, empirically validating the synergy between generative and representational capabilities.
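The instruction-driven unification described above can be illustrated with a toy sketch: a single transformer serves embedding requests with bidirectional attention plus mean pooling, and generation requests with a causal mask plus a language-modeling head. Everything below (the tiny model, its dimensions, the `mode` flag) is illustrative only, not GritLM's actual architecture or API.

```python
import torch
import torch.nn as nn

class TinyUnifiedLM(nn.Module):
    """Toy stand-in for a GRIT-style model: one network, two read-outs."""

    def __init__(self, vocab_size=100, d_model=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, ids, mode):
        mask = None
        if mode == "generate":
            # causal mask: each position attends only to earlier positions
            L = ids.size(1)
            mask = torch.triu(torch.ones(L, L, dtype=torch.bool), diagonal=1)
        h = self.encoder(self.embed(ids), mask=mask)
        if mode == "embed":
            # representation task: bidirectional attention, mean-pooled states
            return h.mean(dim=1)
        # generative task: next-token logits under the causal mask
        return self.lm_head(h)

model = TinyUnifiedLM()
ids = torch.randint(0, 100, (1, 8))
emb = model(ids, mode="embed")        # one vector per sequence
logits = model(ids, mode="generate")  # one logit row per position
```

The design point this sketch captures is that the task is selected entirely by how the input is framed and how the hidden states are read out, not by separate weights.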

📝 Abstract
All text-based language problems can be reduced to either generation or embedding. Current models only perform well at one or the other. We introduce generative representational instruction tuning (GRIT) whereby a large language model is trained to handle both generative and embedding tasks by distinguishing between them through instructions. Compared to other open models, our resulting GritLM 7B sets a new state of the art on the Massive Text Embedding Benchmark (MTEB) and outperforms all models up to its size on a range of generative tasks. By scaling up further, GritLM 8x7B outperforms all open generative language models that we tried while still being among the best embedding models. Notably, we find that GRIT matches training on only generative or embedding data, thus we can unify both at no performance loss. Among other benefits, the unification via GRIT speeds up Retrieval-Augmented Generation (RAG) by >60% for long documents, by no longer requiring separate retrieval and generation models. Models, code, etc. are freely available at https://github.com/ContextualAI/gritlm.
Problem

Research questions and friction points this paper is trying to address.

All text-based tasks reduce to generation or embedding, yet current models excel at only one of the two.
Can a single model match specialized models on both MTEB and generative benchmarks?
Maintaining separate retrieval and generation models makes RAG costly, especially for long documents.
Innovation

Methods, ideas, or system contributions that make the work stand out.

GRIT unifies generative and embedding training in one model, distinguishing the tasks via instructions.
GritLM 7B leads open models on MTEB while outperforming similarly sized open models on generative benchmarks.
The unified model speeds up RAG by over 60% for long documents by eliminating a separate retriever.
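The RAG speedup follows from using one model for both stages. A hedged sketch of that control flow (the stub `unified_model`, its `mode` argument, and the hash-based pseudo-embeddings are all stand-ins, not GritLM's real API):

```python
import hashlib

import torch
import torch.nn.functional as F

def unified_model(text, mode):
    """Toy stand-in: deterministic pseudo-embedding in "embed" mode,
    a canned continuation in "generate" mode. Hypothetical interface."""
    if mode == "embed":
        seed = int(hashlib.md5(text.encode()).hexdigest(), 16) % (2**31)
        g = torch.Generator().manual_seed(seed)
        return torch.randn(64, generator=g)
    return f"Answer grounded in: {text[:40]}"

def rag(query, documents):
    # Stage 1: retrieval -- embed query and documents with the SAME model.
    q = unified_model(query, mode="embed")
    d = torch.stack([unified_model(doc, mode="embed") for doc in documents])
    best = documents[int(F.cosine_similarity(q.unsqueeze(0), d).argmax())]
    # Stage 2: generation -- because both stages share one model, document
    # states from the embedding pass could be cached and reused here, which
    # is where the reported >60% long-document speedup comes from.
    return unified_model(f"{best}\n\n{query}", mode="generate")

answer = rag(
    "What does GRIT unify?",
    ["GRIT unifies generation and embedding.",
     "An unrelated document about the weather."],
)
```

With two separate models, stage 1 and stage 2 would each re-encode the document from scratch; the sketch's single `unified_model` is what makes the caching argument possible.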