🤗 AI Summary
Multi-vector retrieval models (e.g., ColBERT) suffer from high storage and computational overhead because they store many contextualized token-level embeddings per document, hindering practical deployment.
Method: This paper proposes an end-to-end learnable structured pruning framework for multi-vector models. It uniquely integrates a differentiable clustering objective directly into the training process, jointly optimizing clustering quality, retrieval effectiveness, and robustness via token-level embedding regularization. The approach requires no post-hoc processing and leverages intrinsic structural priors to guide representation learning, enabling vector denoising and compact modeling.
Contribution/Results: Evaluated on the BEIR benchmark, the pruned model achieves 3× compression while outperforming the original ColBERT in retrieval accuracy. At an aggressive 11× compression ratio, it incurs only a 3.6% drop in NDCG@10, demonstrating that learned clustering effectively preserves semantic fidelity while enabling efficient denoising and compression.
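To make the "differentiable clustering objective" concrete, below is a minimal NumPy sketch of one common way such an objective can be written: soft-assign each token embedding to centroids via a softmax over negative squared distances, then penalize the expected distance under that assignment. The function name `soft_clustering_loss`, the `temperature` parameter, and the loss form are illustrative assumptions, not the paper's exact formulation; in actual training this term would be combined with the retrieval loss and optimized end to end.

```python
import numpy as np

def soft_clustering_loss(vecs, centroids, temperature=0.1):
    # Hypothetical differentiable clustering regularizer (not the paper's
    # exact loss): soft-assign each vector to centroids with a softmax
    # over negative squared distances, then take the expected distance.
    d2 = ((vecs[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)  # (n, k)
    logits = -d2 / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerically stable softmax
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    return (p * d2).sum(axis=1).mean()

# Toy check: vectors sitting exactly on centroids incur ~zero penalty,
# while scattered vectors are penalized.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(4, 8))
loss_tight = soft_clustering_loss(centroids.copy(), centroids)
loss_loose = soft_clustering_loss(rng.normal(size=(32, 8)), centroids)
```

Because every operation here is smooth, gradients flow from the loss back into the embeddings, which is what lets clusterability be learned during training rather than imposed on frozen vectors afterwards.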
📄 Abstract
Multi-vector models, such as ColBERT, are a significant advancement in neural information retrieval (IR), delivering state-of-the-art performance by representing queries and documents with multiple contextualized token-level embeddings. However, this increased representation size introduces considerable storage and computational overheads, which have hindered widespread adoption in practice. A common approach to mitigate this overhead is to cluster the model's frozen vectors, but this strategy's effectiveness is fundamentally limited by the intrinsic clusterability of these embeddings. In this work, we introduce CRISP (Clustered Representations with Intrinsic Structure Pruning), a novel multi-vector training method which learns inherently clusterable representations directly within the end-to-end training process. By integrating clustering into the training phase rather than imposing it post-hoc, CRISP significantly outperforms post-hoc clustering at all representation sizes, as well as other token pruning methods. On the BEIR retrieval benchmarks, CRISP achieves a significant rate of ~3x reduction in the number of vectors while outperforming the original unpruned model. This indicates that learned clustering effectively denoises the model by filtering irrelevant information, thereby generating more robust multi-vector representations. With more aggressive clustering, CRISP achieves an 11x reduction in the number of vectors with only a 3.6% quality loss.
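The post-hoc baseline the abstract contrasts against can be sketched in a few lines: run k-means over a document's frozen token vectors, keep only the centroids, and score with ColBERT-style MaxSim late interaction. Everything here (function names, the toy dimensions, plain Lloyd's k-means) is an illustrative assumption, not the paper's implementation; it only shows where compression comes from and why it is capped by how clusterable the frozen embeddings happen to be.

```python
import numpy as np

def maxsim_score(query_vecs, doc_vecs):
    # ColBERT-style late interaction: each query token takes its most
    # similar document vector; per-token maxima are summed.
    sims = query_vecs @ doc_vecs.T          # (n_query, n_doc) dot products
    return sims.max(axis=1).sum()

def kmeans_prune(vecs, k, iters=20, seed=0):
    # Post-hoc pruning baseline: replace a document's token vectors with
    # k centroids from plain Lloyd's k-means (for illustration only).
    rng = np.random.default_rng(seed)
    centroids = vecs[rng.choice(len(vecs), size=k, replace=False)].copy()
    for _ in range(iters):
        d2 = ((vecs[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
        assign = d2.argmin(axis=1)
        for c in range(k):
            members = vecs[assign == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    return centroids

# Toy example: 32 unit-norm token vectors compressed to 8 centroids (4x fewer).
rng = np.random.default_rng(1)
doc = rng.normal(size=(32, 128))
doc /= np.linalg.norm(doc, axis=1, keepdims=True)
query = rng.normal(size=(8, 128))
query /= np.linalg.norm(query, axis=1, keepdims=True)
pruned = kmeans_prune(doc, k=8)
full_score = maxsim_score(query, doc)
compressed_score = maxsim_score(query, pruned)
```

Note that because each centroid is an average of its members, a query token's dot product with a centroid can never exceed its best dot product with the original vectors, so the compressed MaxSim score is bounded above by the full score; how much is lost depends entirely on how well the frozen vectors cluster, which is the limitation CRISP's train-time clustering is designed to remove.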