🤖 AI Summary
Large language models (LLMs) struggle to capture complex grammatical rules when translating extremely low-resource languages, owing to scarce training data and the opaque way grammar books present rules. Method: the paper decomposes grammar-book-based translation into two steps, rule retrieval and rule application, and introduces ZhuangRules, a modular dataset that pairs grammar rules with corresponding test sentences so each step can be studied in isolation. It further proposes formalizing grammatical rules as executable code functions, exploiting LLMs' strong structured reasoning over code to improve both rule comprehension and application. Contribution/Results: the analysis identifies rule retrieval as the primary bottleneck in grammar-based translation, and the code-rule representation boosts both retrieval and application, yielding a 13.1-point BLEU improvement on Zhuang, a severely low-resource language. The results show that codified syntactic representations strengthen LLMs' grammatical generalization, supporting controllable, grammar-aware translation in data-scarce scenarios.
📝 Abstract
While large language models (LLMs) have shown promise in translating extremely low-resource languages using resources like dictionaries, the effectiveness of grammar books remains debated. This paper investigates the role of grammar books in translating extremely low-resource languages by decomposing the translation process into two key steps: grammar rule retrieval and application. To facilitate the study, we introduce ZhuangRules, a modularized dataset of grammar rules and their corresponding test sentences. Our analysis reveals that rule retrieval constitutes a primary bottleneck in grammar-based translation. Moreover, although LLMs can apply simple rules for translation when explicitly provided, they encounter difficulties in handling more complex rules. To address these challenges, we propose representing grammar rules as code functions, considering their similarities in structure and the benefit of code in facilitating LLM reasoning. Our experiments show that using code rules significantly boosts both rule retrieval and application, ultimately resulting in a 13.1% BLEU improvement in translation.
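The paper does not show its exact code-rule format here, but the core idea of representing a grammar rule as an executable function can be sketched as follows. The rule below is purely illustrative (the function name, tag set, and the specific reordering rule are assumptions, not taken from ZhuangRules): its docstring states the condition and action in grammar-book style, while the body applies the transformation deterministically.

```python
def rule_modifier_after_head(tokens, tags):
    """Illustrative grammar rule encoded as a code function.

    Condition: an adjective (ADJ) directly precedes the noun (NOUN)
               it modifies.
    Action:    move the adjective after the noun, reflecting a
               head-initial modifier order.
    Returns a new token list; the inputs are left unchanged.
    """
    out_tokens, out_tags = list(tokens), list(tags)
    i = 0
    while i < len(out_tokens) - 1:
        if out_tags[i] == "ADJ" and out_tags[i + 1] == "NOUN":
            # Swap tokens and tags together so later rules see a
            # consistent state.
            out_tokens[i], out_tokens[i + 1] = out_tokens[i + 1], out_tokens[i]
            out_tags[i], out_tags[i + 1] = out_tags[i + 1], out_tags[i]
            i += 2  # skip past the pair just reordered
        else:
            i += 1
    return out_tokens

# Example: an ADJ+NOUN pair is reordered to NOUN+ADJ.
reordered = rule_modifier_after_head(["big", "house"], ["ADJ", "NOUN"])
```

A function in this form gives an LLM an explicit, checkable condition and action to reason over, rather than a prose description buried in a grammar book.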