🤖 AI Summary
Existing research on Retrieval-Augmented Code Generation (RACG) is confined to monolingual settings, leaving the effectiveness and safety of cross-lingual transfer systematically unexplored. Method: We introduce the first multilingual RACG benchmark, covering 13 programming languages and roughly 14,000 samples; propose a cross-lingual adversarial data construction method; develop domain-adapted code embedding and retrieval models; and establish a unified evaluation protocol. Contributions/Results: Key findings include: (1) Java significantly outperforms Python in cross-lingual RACG, revealing a utility imbalance across languages; (2) certain adversarial perturbations paradoxically improve performance; and (3) domain-specific code retrievers substantially surpass general-purpose text retrievers. Experiments demonstrate that multilingual RACG enhances generation quality; the work provides the first quantitative characterization of robustness disparities between monolingual and cross-lingual settings, and publicly releases the benchmark dataset and analytical framework.
📝 Abstract
Current research on large language models (LLMs) with retrieval-augmented code generation (RACG) mainly focuses on single-language settings, leaving cross-lingual effectiveness and security unexplored. Multi-lingual RACG systems are valuable for migrating codebases across programming languages (PLs), yet they face risks from error propagation (e.g., adversarial data corruption) during cross-lingual transfer. We construct a dataset spanning 13 PLs with nearly 14k instances to explore the utility and robustness of multi-lingual RACG systems. Our investigation reveals four key insights: (1) Effectiveness: multi-lingual RACG significantly enhances code generation by multi-lingual code LLMs; (2) Inequality: Java demonstrates superior cross-lingual utility over Python in RACG; (3) Robustness: adversarial attacks degrade performance significantly in mono-lingual RACG but have mitigated impact in cross-lingual scenarios; counterintuitively, perturbed code may even improve RACG in cross-lingual settings; (4) Specialization: domain-specific code retrievers significantly outperform general-purpose text retrievers. These findings establish a foundation for developing effective and secure multi-lingual code assistants.
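The retrieve-then-generate loop at the heart of any RACG system can be sketched in a few lines. The toy corpus, the bag-of-tokens embedding, and the prompt template below are illustrative assumptions for exposition only; the paper's actual systems use learned domain-adapted code embedders and LLM generators.

```python
import math
from collections import Counter

# Hypothetical multi-lingual snippet corpus (language tag, code).
CORPUS = [
    ("java",   "public static int add(int a, int b) { return a + b; }"),
    ("python", "def add(a, b):\n    return a + b"),
    ("go",     "func Add(a int, b int) int { return a + b }"),
]

def embed(text):
    """Toy bag-of-tokens 'embedding'; real RACG retrievers use trained code encoders."""
    return Counter(text.lower().replace("(", " ").replace(")", " ").split())

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0) for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query, corpus, k=2):
    """Rank snippets by similarity to the query; cross-lingual hits are allowed."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda item: cosine(q, embed(item[1])), reverse=True)
    return ranked[:k]

def build_prompt(task, retrieved):
    """Assemble retrieved snippets (possibly in other PLs) as context for the LLM."""
    context = "\n\n".join(f"# [{lang}]\n{code}" for lang, code in retrieved)
    return f"Reference snippets:\n{context}\n\nTask: {task}\n"

prompt = build_prompt("implement an add function in Python",
                      retrieve("add two integers", CORPUS))
print(prompt)
```

In a full system, `prompt` would be passed to a code LLM, and the adversarial setting studied in the paper corresponds to corrupting entries of `CORPUS` before retrieval.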