🤖 AI Summary
This study addresses the lack of systematic evaluation of how effectively large language models generate counterfactual examples in non-English contexts. By comparing direct generation in the target language against an English-translation-based strategy across six languages, and combining automatic metrics, a fine-grained error taxonomy, and data augmentation experiments, the work reveals common cross-lingual patterns in counterfactual editing and identifies four prevalent error types. The findings show that translation-based counterfactuals require larger modifications yet achieve higher validity. Moreover, multilingual counterfactual data augmentation substantially outperforms cross-lingual transfer, with especially notable gains for low-resource languages. However, the quality of the generated counterfactuals remains a critical bottleneck limiting further improvements in model robustness.
📝 Abstract
Counterfactuals are minimally edited inputs that change a model's prediction, serving as a promising approach to explaining model behavior. Large language models (LLMs) excel at generating English counterfactuals and demonstrate multilingual proficiency; however, their effectiveness at generating multilingual counterfactuals remains unclear. To this end, we conduct a comprehensive study on multilingual counterfactuals. First, we perform automatic evaluations across six languages on counterfactuals generated directly in the target language and on those derived via English translation. Although translation-based counterfactuals offer higher validity than their directly generated counterparts, they demand substantially more modifications and still fall short of the quality of the original English counterfactuals. Second, we find that the patterns of edits applied to counterfactuals in high-resource European languages are remarkably similar, suggesting that cross-lingual perturbations follow common strategic principles. Third, we identify and categorize four main types of errors that consistently appear in the generated counterfactuals across languages. Finally, we show that multilingual counterfactual data augmentation (CDA) yields larger model performance improvements than cross-lingual CDA, especially for lower-resource languages. Yet, the imperfections of the generated counterfactuals limit gains in model performance and robustness.