🤖 AI Summary
Current large language models (LLMs) support only hundreds of the world’s ~7,000 languages, and dictionary-based prompting—though effective for low-resource language translation—incurs prohibitive token overhead when applied indiscriminately across full dictionaries.
Method: We propose *Automatic Dictionary Selection* (ADS), a novel task, together with a training-free, fine-tuning-free method, *Select Low-frequency Words* (SLoW): leveraging publicly available monolingual corpora, we estimate target-language word frequencies and retain only the dictionary entries for low-frequency words in dictionary-based prompting.
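To make the selection idea concrete, here is a minimal sketch of frequency-based pruning. The `keep_ratio` parameter, whitespace tokenization, and dictionary layout are illustrative assumptions for this sketch, not the paper's exact procedure.

```python
from collections import Counter

def estimate_frequencies(corpus_lines):
    """Estimate word frequencies from a public monolingual corpus
    (no access to the LLM's actual training data is needed)."""
    counts = Counter()
    for line in corpus_lines:
        counts.update(line.lower().split())
    return counts

def slow_select(dictionary, freqs, keep_ratio=0.3):
    """Keep only the dictionary entries whose words are among the
    lowest-frequency ones; unseen words count as frequency 0."""
    ranked = sorted(dictionary, key=lambda w: freqs.get(w, 0))
    cutoff = max(1, int(len(ranked) * keep_ratio))
    kept = set(ranked[:cutoff])
    return {w: t for w, t in dictionary.items() if w in kept}
```

High-frequency words are the ones an LLM is most likely to already handle, so dropping their entries saves prompt tokens while keeping the hints that matter most.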
Contribution/Results: In zero-shot evaluation with ChatGPT, Llama, and DeepSeek on 100 FLORES languages, SLoW reduces average token consumption by 38% while improving translation quality over the full-dictionary baseline in 67% of languages, demonstrating simultaneous gains in efficiency and performance without any model modification.
📝 Abstract
There are more than 7,000 languages in the world, yet current Large Language Models (LLMs) support only hundreds of them. Dictionary-based prompting methods can enhance translation for the rest, but most methods use all the available dictionary entries, which can be expensive; a flexible trade-off between token consumption and translation performance is preferable. This paper proposes a novel task called **A**utomatic **D**ictionary **S**election (**ADS**). The goal of the task is to automatically select which dictionary entries to use to enhance translation. We propose a novel and effective method, **S**elect **Lo**w-frequency **W**ords! (**SLoW**), which selects the dictionary entries for low-frequency words. Our method has two unique advantages. First, it needs no access to the training data for frequency estimation (which is usually unavailable). Second, it inherits the advantage of dictionary-based methods: no additional tuning of the LLM is required. Experimental results on 100 languages from FLORES indicate that SLoW surpasses strong baselines and clearly saves token usage, with many languages even surpassing the translation performance of the full-dictionary baseline.¹ ²

¹ Strikingly, there is no need to use the actual training data (often unobtainable) for frequency estimation: frequencies estimated from public resources are still clearly effective at improving translation with ChatGPT, Llama, and DeepSeek.
² Code and data available upon publication.
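The abstract's dictionary-based prompting step can be sketched as follows: translation hints for the selected (low-frequency) words found in the sentence are prepended to the translation request. The prompt wording and the `tgt_lang` default are illustrative assumptions, not the paper's exact template.

```python
def build_prompt(source_sentence, selected_dict, tgt_lang="Spanish"):
    """Dictionary-based prompting: list translation hints for the
    selected dictionary words appearing in the sentence, then ask
    the LLM to translate."""
    words = source_sentence.lower().split()
    hints = [f'"{w}" means "{t}"' for w, t in selected_dict.items()
             if w in words]
    hint_block = "\n".join(hints)
    return (f"{hint_block}\n"
            f"Translate the following sentence into {tgt_lang}:\n"
            f"{source_sentence}")
```

Because the hints are plain text in the prompt, the same procedure works unchanged across ChatGPT, Llama, and DeepSeek, which is why no fine-tuning of the LLM is required.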