Angular Dispersion Accelerates $k$-Nearest Neighbors Machine Translation

📅 2025-09-20
📈 Citations: 0
Influential: 0
🤖 AI Summary
k-NN MT improves translation quality but suffers from high computational and memory overhead in approximate nearest-neighbor retrieval over large external memories. To address this, we propose an acceleration paradigm grounded in the performance properties of the retrieval data structure: rather than reducing datastore size or query count, we optimize the angular distribution of the neural hidden states via angular dispersion regularization. This encourages a more uniform spherical distribution of the high-dimensional contextual representations, which significantly improves load balancing in tree-based approximate k-NN retrieval structures. The method improves retrieval-tree balance and query efficiency, achieving an average 2.1× retrieval speedup, reduced latency, and consistent BLEU gains of 0.3–0.6 across multiple translation benchmarks. The approach offers a capacity-preserving path to efficient k-NN MT, avoiding the usual trade-off between memory utilization and inference speed.
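The regularization idea can be sketched as a penalty on the pairwise cosine similarity of hidden states: minimizing it pushes the (unit-normalized) vectors apart on the sphere. This is a minimal NumPy illustration under stated assumptions; the function name and exact loss form are ours, not the paper's implementation.

```python
import numpy as np

def angular_dispersion_loss(h):
    """Mean pairwise cosine similarity of hidden states (off-diagonal).
    Lower means more angularly dispersed; minimizing this as an auxiliary
    loss would spread representations more uniformly on the sphere.
    Illustrative sketch, not the paper's exact regularizer."""
    u = h / np.linalg.norm(h, axis=1, keepdims=True)  # unit-normalize rows
    sim = u @ u.T                                     # pairwise cosine sims
    n = len(u)
    off_diag = sim[~np.eye(n, dtype=bool)]            # drop self-similarity
    return float(off_diag.mean())
```

A tightly clustered batch of states scores near 1, while roughly isotropic states score near 0, so adding this term to the training objective penalizes angular collapse.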

📝 Abstract
Augmenting neural machine translation with external memory at decoding time, in the form of k-nearest neighbors machine translation ($k$-NN MT), is a well-established strategy for increasing translation performance. $k$-NN MT retrieves a set of tokens that occurred in the most similar contexts recorded in a prepared data store, using hidden state representations of translation contexts as vector lookup keys. One of the main disadvantages of this method is the high computational cost and memory requirements. Since an exhaustive search is not feasible in large data stores, practitioners commonly use approximate $k$-NN MT lookup, yet even such algorithms are a bottleneck. In contrast to research directions seeking to accelerate $k$-NN MT by reducing data store size or the number of lookup calls, we pursue an orthogonal direction based on the performance properties of approximate $k$-NN MT lookup data structures. In particular, we propose to encourage angular dispersion of the neural hidden representations of contexts. We show that improving dispersion leads to better balance in the retrieval data structures, accelerating retrieval and slightly improving translations.
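The lookup the abstract describes can be illustrated with a toy exact search: the datastore maps context hidden states (keys) to the target tokens that followed them, and a query state retrieves the k nearest keys and a softmax distribution over their tokens. Real systems use approximate search over millions of entries; the names and the temperature parameter here are illustrative assumptions.

```python
import numpy as np

def knn_mt_retrieve(query, keys, tokens, k=4, temperature=10.0):
    """Toy exact k-NN MT lookup. keys: datastore of context hidden
    states (one row per recorded context); tokens: the target token
    emitted in each context. Returns the k neighbor tokens and the
    softmax weights over negative distances (closer -> higher weight)."""
    d2 = ((keys - query) ** 2).sum(axis=1)   # squared L2 distance to each key
    idx = np.argsort(d2)[:k]                 # indices of the k nearest keys
    logits = -d2[idx] / temperature
    w = np.exp(logits - logits.max())        # numerically stable softmax
    return [tokens[i] for i in idx], w / w.sum()
```

In full k-NN MT this retrieved distribution is interpolated with the base model's next-token distribution at each decoding step.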
Problem

Research questions and friction points this paper is trying to address.

High computational cost and memory requirements in k-NN machine translation
Bottleneck caused by approximate k-NN lookup algorithms during retrieval
Inefficient retrieval caused by imbalanced index structures built over neural hidden representations
Innovation

Methods, ideas, or system contributions that make the work stand out.

Encourages angular dispersion of hidden representations
Improves balance in retrieval data structures
Accelerates approximate k-NN lookup performance
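The balance intuition behind these contributions can be demonstrated with random hyperplane splits, the basic partitioning step in tree-based ANN indexes: angularly clustered vectors fall mostly on one side of a random hyperplane (deep, lopsided trees), while dispersed vectors split roughly evenly. This is our own illustrative metric under stated assumptions, not the paper's measurement.

```python
import numpy as np

def split_imbalance(vectors, trials=200, seed=0):
    """Average imbalance of random hyperplane splits through the origin:
    0.0 means every split is perfectly balanced, 0.5 means all points
    land on one side. Tree-based ANN indexes degrade as this grows.
    Illustrative sketch of the load-balancing effect."""
    rng = np.random.default_rng(seed)
    u = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    imbalances = []
    for _ in range(trials):
        normal = rng.normal(size=u.shape[1])   # random hyperplane normal
        frac = (u @ normal > 0).mean()         # fraction on positive side
        imbalances.append(abs(frac - 0.5))
    return float(np.mean(imbalances))
```

Angularly dispersed representations keep this imbalance low, which is why the regularization yields shallower, better-balanced retrieval trees and faster queries.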