🤖 AI Summary
To address the degraded cross-task and cross-domain generalization of large language models (LLMs) in zero-shot information retrieval—caused by lexical and distributional shifts—this paper introduces Task Arithmetic to zero-shot retrieval for the first time. The authors propose a fine-tuning-free, weight-arithmetic adaptation method that combines instruction-tuned LLM weights via addition and subtraction, fusing multi-task and multi-domain knowledge into a single model for plug-and-play zero-shot re-ranking. Evaluated on scientific, biomedical, and multilingual retrieval benchmarks, the approach achieves improvements of up to +18% in NDCG@10 and +15% in P@10 over state-of-the-art zero-shot re-rankers. This work establishes a lightweight, transferable paradigm for LLM-driven retrieval—requiring no parameter updates or task-specific training—while demonstrating strong zero-shot adaptability across heterogeneous domains and tasks.
📝 Abstract
Large Language Models (LLMs) have shown impressive zero-shot performance across a variety of Natural Language Processing tasks, including document re-ranking. However, their effectiveness degrades on unseen tasks and domains, largely due to shifts in vocabulary and word distributions. In this paper, we investigate Task Arithmetic, a technique that combines the weights of LLMs pre-trained on different tasks or domains via simple mathematical operations, such as addition or subtraction, to adapt retrieval models without requiring additional fine-tuning. Our method synthesizes diverse task- and domain-specific knowledge into a single model, enabling effective zero-shot adaptation across different retrieval contexts. Extensive experiments on publicly available scientific, biomedical, and multilingual datasets show that our method improves state-of-the-art re-ranking performance by up to 18% in NDCG@10 and 15% in P@10. In addition to these empirical gains, our analysis provides insights into the strengths and limitations of Task Arithmetic as a practical strategy for zero-shot learning and model adaptation. We make our code publicly available at https://github.com/DetectiveMB/Task-Arithmetic-for-ZS-IR.
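To make the core operation concrete, here is a minimal sketch of task arithmetic — computing "task vectors" as the element-wise difference between fine-tuned and pre-trained weights, then adding scaled vectors back to the base weights. This is an illustrative toy (weights as plain Python dicts of floats, hypothetical scale factor and parameter names), not the paper's actual implementation, which would operate on full model state dicts of tensors.

```python
def task_vector(pretrained, finetuned):
    """Task vector: element-wise difference (finetuned - pretrained)."""
    return {k: finetuned[k] - pretrained[k] for k in pretrained}

def apply_task_vectors(pretrained, vectors, scale=1.0):
    """Add scaled task vectors to base weights -- no gradient updates."""
    merged = dict(pretrained)
    for vec in vectors:
        for k, v in vec.items():
            merged[k] += scale * v
    return merged

# Hypothetical toy weights for illustration only.
base = {"w": 1.0}               # base (pre-trained) model
science = {"w": 1.5}            # checkpoint adapted to a scientific domain
biomed = {"w": 0.8}             # checkpoint adapted to a biomedical domain

tv_sci = task_vector(base, science)   # {"w": 0.5}
tv_bio = task_vector(base, biomed)    # {"w": -0.2}

# Fuse both domains into one set of weights: 1.0 + 0.5 - 0.2 ≈ 1.3
merged = apply_task_vectors(base, [tv_sci, tv_bio], scale=1.0)
```

In practice the scale factor is a hyperparameter, and the same arithmetic applies per tensor across an entire model checkpoint; subtraction can likewise remove an unwanted behavior encoded in a task vector.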