CRISP: Clustering Multi-Vector Representations for Denoising and Pruning

πŸ“… 2025-05-16
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
Multi-vector retrieval models (e.g., ColBERT) represent queries and documents with many token-level embeddings, incurring storage and computational overhead that hinders practical deployment. Method: This paper proposes CRISP, an end-to-end trainable pruning framework for multi-vector models. It integrates a clustering objective directly into the training process, jointly optimizing clustering quality and retrieval effectiveness so that the learned token embeddings are inherently clusterable. Because clusterability is built in during training, no post-hoc processing is required, and clustering acts as a denoising step that yields compact representations. Contribution/Results: Evaluated on the BEIR benchmark, the pruned model achieves roughly 3× compression while outperforming the original unpruned ColBERT in retrieval accuracy. At an aggressive 11× compression ratio, it incurs only a 3.6% drop in NDCG@10, demonstrating that learned clustering preserves semantic fidelity while enabling efficient compression.
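The post-hoc baseline that CRISP improves upon can be pictured as running k-means over a document's frozen token embeddings and keeping only the centroids. A minimal numpy sketch (not the paper's code; the k-means routine, seed, and toy shapes are illustrative assumptions):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: returns k centroids for the rows of X."""
    rng = np.random.default_rng(seed)
    # initialize centroids from k distinct rows of X (fancy indexing copies)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest centroid by Euclidean distance
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned vectors
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centroids[j] = members.mean(axis=0)
    return centroids

# A toy "document" of 32 contextualized token embeddings of dimension 8.
rng = np.random.default_rng(1)
doc_vecs = rng.normal(size=(32, 8))

# Post-hoc pruning: replace the 32 token vectors with k=8 cluster centroids,
# a 4x reduction in the number of vectors stored for this document.
pruned = kmeans(doc_vecs, k=8)
print(doc_vecs.shape, "->", pruned.shape)
```

The paper's point is that this works only as well as the frozen embeddings happen to cluster; CRISP instead trains the encoder so its outputs are clusterable by construction.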

πŸ“ Abstract
Multi-vector models, such as ColBERT, are a significant advancement in neural information retrieval (IR), delivering state-of-the-art performance by representing queries and documents with multiple contextualized token-level embeddings. However, this increased representation size introduces considerable storage and computational overheads, which have hindered widespread adoption in practice. A common approach to mitigate this overhead is to cluster the model's frozen vectors, but this strategy's effectiveness is fundamentally limited by the intrinsic clusterability of these embeddings. In this work, we introduce CRISP (Clustered Representations with Intrinsic Structure Pruning), a novel multi-vector training method which learns inherently clusterable representations directly within the end-to-end training process. By integrating clustering into the training phase rather than imposing it post-hoc, CRISP significantly outperforms post-hoc clustering at all representation sizes, as well as other token pruning methods. On the BEIR retrieval benchmarks, CRISP achieves a ~3x reduction in the number of vectors while outperforming the original unpruned model. This indicates that learned clustering effectively denoises the model by filtering irrelevant information, thereby generating more robust multi-vector representations. With more aggressive clustering, CRISP achieves an 11x reduction in the number of vectors with only a 3.6% quality loss.
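The "multiple contextualized token-level embeddings" in the abstract are scored with ColBERT's late-interaction (MaxSim) rule: each query token takes its best match over the document's tokens, and those maxima are summed. A self-contained sketch with random toy embeddings (the shapes and cosine normalisation are illustrative assumptions, not the paper's exact setup):

```python
import numpy as np

def maxsim_score(query_vecs, doc_vecs):
    """ColBERT-style late interaction: for each query token embedding,
    take its maximum cosine similarity over all document token
    embeddings, then sum over the query tokens."""
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T                      # (num_q_tokens, num_d_tokens)
    return sim.max(axis=1).sum()

rng = np.random.default_rng(0)
query = rng.normal(size=(4, 8))        # 4 query token embeddings
doc   = rng.normal(size=(30, 8))       # 30 document token embeddings
print(float(maxsim_score(query, doc)))
```

Scoring cost and index size both scale with the number of document vectors, which is why reducing that count (the 3x and 11x figures above) directly cuts storage and compute.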
Problem

Research questions and friction points this paper is trying to address.

Reducing storage and computational overheads in multi-vector IR models
Improving clusterability of embeddings during training, not post-hoc
Achieving significant vector reduction with minimal quality loss
Innovation

Methods, ideas, or system contributions that make the work stand out.

Learns inherently clusterable representations by integrating a clustering objective into end-to-end training
Outperforms post-hoc clustering and other token pruning methods at all representation sizes
Achieves a ~3x vector reduction while beating the unpruned model, and an 11x reduction with only a 3.6% quality loss