🤖 AI Summary
In ColBERT, the Chamfer-style scoring function treats all query tokens equally, neglecting query-term importance and limiting fine-grained matching. Method: We propose the Importance-Weighted Chamfer Distance (IWCD), which introduces learnable, term-level importance weights solely for query tokens, while preserving precomputed document vectors, by weighting the maximum similarity score between each query token and the document tokens. These weights are initialized with IDF and optimized via few-shot fine-tuning, requiring no modification to existing vector representations or document encoders. Contribution/Results: IWCD significantly enhances semantic alignment in multi-vector retrieval. Under the BEIR zero-shot setting, it improves Recall@10 by +1.28% on average; with few-shot fine-tuning, the gain rises to +3.66%. This demonstrates the method's effectiveness, efficiency, and strong generalization across diverse retrieval tasks.
📝 Abstract
ColBERT introduced a late interaction mechanism that independently encodes queries and documents using BERT, and computes similarity via fine-grained interactions over token-level vector representations. This design enables expressive matching while allowing efficient scoring, since the multi-vector document representations can be precomputed offline. ColBERT scores query-document pairs with a Chamfer-style function: for each query token, it takes the maximum similarity over all document tokens, and sums these maxima across all query tokens.
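The scoring rule above can be sketched as follows; this is a minimal illustration assuming L2-normalized token embeddings as NumPy arrays, with illustrative function names:

```python
import numpy as np

def late_interaction_score(Q: np.ndarray, D: np.ndarray) -> float:
    """Chamfer-style late-interaction (MaxSim) score.

    Q: (n_q, dim) query token embeddings, assumed L2-normalized.
    D: (n_d, dim) document token embeddings, assumed L2-normalized.
    For each query token, take the maximum cosine similarity over all
    document tokens, then sum these maxima over the query tokens.
    """
    sim = Q @ D.T                        # (n_q, n_d) pairwise cosine similarities
    return float(sim.max(axis=1).sum())  # per-query-token MaxSim, summed
```

Because `D` is fixed per document, it can be encoded and cached offline; only the query encoding, the similarity matrix, and the row-wise max are computed at query time.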
In our work, we explore enhancements to the Chamfer distance function by computing a weighted sum over query token contributions, where the weights reflect token importance. Empirically, we show that this simple extension, which requires training only the token weights while keeping the multi-vector representations fixed, further enhances the expressiveness of the late-interaction multi-vector mechanism. In particular, on the BEIR benchmark, our method achieves an average improvement of 1.28% in Recall@10 in the zero-shot setting using IDF-based weights, and 3.66% through few-shot fine-tuning.
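The weighted extension can be sketched as below. This is an illustration under stated assumptions: the exact IDF formula and any weight normalization used in the paper are not given here, so the smoothed-IDF initialization is a common variant, not the paper's definition.

```python
import numpy as np

def weighted_late_interaction_score(Q: np.ndarray, D: np.ndarray,
                                    w: np.ndarray) -> float:
    """Importance-weighted Chamfer-style score.

    w: (n_q,) per-query-token importance weights. In the proposed
    method these are initialized from IDF and then fine-tuned, while
    the document vectors D stay fixed.
    """
    sim = Q @ D.T                              # (n_q, n_d) similarities
    return float((w * sim.max(axis=1)).sum())  # weighted sum of MaxSim

def idf_weights(doc_freqs, n_docs: int) -> np.ndarray:
    """Smoothed-IDF weight initialization (a common variant; the
    paper's exact formula is an assumption)."""
    dfs = np.asarray(doc_freqs, dtype=float)
    return np.log((n_docs + 1.0) / (dfs + 1.0)) + 1.0
```

Setting all weights to 1 recovers the original unweighted ColBERT score, so the extension strictly generalizes the baseline.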