🤖 AI Summary
This work addresses the high computational cost of cross-encoders in re-ranking, which limits their applicability to large-scale retrieval despite their superior effectiveness. By analyzing the internal interaction mechanisms of cross-encoders, the authors identify and remove redundant or detrimental interactions, yielding a lightweight minimal-interaction architecture that combines the strengths of cross-encoders and late-interaction models. The resulting model maintains high accuracy while significantly improving inference efficiency and out-of-domain generalization: it nearly matches the original cross-encoder's performance on in-domain tasks, outperforms late-interaction models such as ColBERT on out-of-domain benchmarks, and reduces inference latency fourfold.
📝 Abstract
Cross-encoders deliver state-of-the-art ranking effectiveness in information retrieval, but have a high inference cost. This not only prevents them from being used as first-stage rankers, but also makes re-ranking documents expensive. Prior work has addressed this bottleneck from two largely separate directions: accelerating cross-encoder inference by sparsifying the attention process, or improving first-stage retrieval effectiveness using more complex models, e.g. late-interaction ones. In this work, we propose to bridge these two approaches, based on an in-depth understanding of the internal mechanisms of cross-encoders. Starting from cross-encoders, we show that it is possible to derive a new late-interaction-like architecture by carefully removing detrimental or unnecessary interactions. We name this architecture MICE (Minimal Interaction Cross-Encoders). We extensively evaluate MICE across both in-domain (ID) and out-of-domain (OOD) datasets. MICE decreases inference latency fourfold compared to standard cross-encoders, matching late-interaction models like ColBERT, while retaining most of the cross-encoder's ID effectiveness and demonstrating superior OOD generalization.