[1] Language-Specific Latent Process Hinders Cross-Lingual Performance
[2] Mufu: Multilingual Fused Learning for Low-Resource Translation with LLM
[3] Simpson’s Paradox and the Accuracy-Fluency Tradeoff in Translation
[4] A Computational Approach to Identifying Cultural Keywords Across Languages
[5] Predicting Human Translation Difficulty with Neural Machine Translation
[6] Predicting Human Translation Difficulty Using Automatic Word Alignment
Research Experience
In 2024, I was a student researcher at Google Research Australia, working mainly on computational methods for difficult (i.e., cross-cultural or low-resource) translation, multilinguality, psycholinguistics, and interpretability.
Education
PhD student in NLP at the University of Melbourne, advised by Ekaterina Vylomova, Charles Kemp, and Trevor Cohn
Background
My research interests include understanding how model architecture, training data, and training algorithms impose learning biases on language models, and how these biases limit their ability to represent cognitively driven language phenomena. I am also interested in studying how large-scale models navigate varying (and sometimes conflicting) goals, and in developing controllable mechanisms that drive convergent or adaptable behavior across time, domains, languages, and modalities.