🤖 AI Summary
Homophone normalization in machine translation for Ge’ez-script languages (e.g., Amharic, Tigrinya) erases orthographic diversity and can degrade cross-lingual transfer. Method: This work relocates homophone normalization from pre-training preprocessing to a post-inference intervention, preserving orthographic variants in monolingual training and cross-lingual transfer setups so that language-specific features in the training data are not lost. Contribution/Results: We present the first systematic validation of post-inference normalization, demonstrating its feasibility and effectiveness: in multilingual settings, BLEU scores improve by up to 1.03 points while the model retains the capacity to represent diverse orthographic forms. The approach enables more controllable and reversible technology-driven interventions in language evolution. Our core contribution is a temporal reconfiguration of normalization, decoupling it from training, which simultaneously improves evaluation metrics and preserves linguistic representation fidelity.
📝 Abstract
Homophone normalization, in which characters that share the same sound in a writing script are mapped to a single character, is a common pre-processing step in Amharic Natural Language Processing (NLP). While this may improve performance as reported by automatic metrics, it also produces models that cannot handle the different written forms of a single language. It may further harm transfer learning, since models trained on normalized data can fail to generalize to other languages. In this paper, we experiment with monolingual training and cross-lingual transfer to understand the impact of normalization on languages that use the Ge'ez script. We then propose a post-inference intervention in which normalization is applied to model predictions instead of to the training data. With this simple post-inference normalization scheme, we achieve an increase in BLEU score of up to 1.03 while preserving language features in training. Our work contributes to the broader discussion on technology-facilitated language change and calls for more language-aware interventions.
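To make the proposed intervention concrete, here is a minimal Python sketch of post-inference homophone normalization. The character map is illustrative and covers only the base order of a few well-known Ge'ez homophone families (e.g., ሀ/ሐ/ኀ, ሰ/ሠ, አ/ዐ, ጸ/ፀ); a full normalizer would cover all seven vowel orders of each family, and the exact mapping used in the paper may differ. The key point is *where* the mapping is applied: to model outputs and references at evaluation time, never to the training data.

```python
# Hedged sketch: a partial homophone map for Ge'ez-script text.
# Only the base order of each family is shown; real normalizers
# extend this to all seven orders per character family.
HOMOPHONE_MAP = str.maketrans({
    "ሐ": "ሀ", "ኀ": "ሀ",  # ha-family variants -> canonical ha
    "ሠ": "ሰ",             # se-family variant  -> canonical se
    "ዐ": "አ",             # a-family variant   -> canonical a
    "ፀ": "ጸ",             # tse-family variant -> canonical tse
})

def normalize(text: str) -> str:
    """Map homophone character variants to one canonical character."""
    return text.translate(HOMOPHONE_MAP)

# Post-inference use: normalize hypothesis and reference *after*
# decoding, just before computing BLEU, so the training corpus keeps
# its orthographic variants intact.
hypothesis = normalize("ሰላም")   # model output
reference = normalize("ሠላም")    # reference with a variant spelling
assert hypothesis == reference   # variants collapse only at evaluation
```

Because the mapping is applied outside the model, it is easy to switch off or replace, which is what makes the intervention controllable and reversible.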