🤖 AI Summary
Cross-lingual embedding inversion attacks pose significant privacy risks in multilingual NLP systems. Method: This paper proposes a few-shot cross-lingual embedding inversion attack. Its core innovation is the first formulation of inter-lingual syntactic and lexical similarity as a graph-structured constraint, yielding a language-similarity-aware graph-based constrained optimization framework that theoretically generalizes existing methods such as ALGEN. To ensure robust cross-lingual embedding alignment with only ten samples per language, the method jointly regularizes the optimization with a Frobenius-norm penalty and linear inequality or total variation constraints. Contribution/Results: Experiments across multiple languages and embedding models demonstrate a 10–20% improvement in ROUGE-L score over baselines, substantially enhancing attack transferability, and empirically validate that linguistic similarity is a key determinant of inversion attack transferability.
📝 Abstract
We propose LAGO (Language Similarity-Aware Graph Optimization), a novel approach for few-shot cross-lingual embedding inversion attacks, addressing critical privacy vulnerabilities in multilingual NLP systems. Unlike prior work on embedding inversion attacks that treats languages independently, LAGO explicitly models linguistic relationships through a graph-based constrained distributed optimization framework. By integrating syntactic and lexical similarity as edge constraints, our method enables collaborative parameter learning across related languages. Theoretically, we show this formulation generalizes prior approaches, such as ALGEN, which emerges as a special case when similarity constraints are relaxed. Our framework uniquely combines Frobenius-norm regularization with linear inequality or total variation constraints, ensuring robust alignment of cross-lingual embedding spaces even with extremely limited data (as few as 10 samples per language). Extensive experiments across multiple languages and embedding models demonstrate that LAGO substantially improves the transferability of attacks, with a 10–20% increase in ROUGE-L score over baselines. This work establishes language similarity as a critical factor in inversion attack transferability, urging renewed focus on language-aware privacy-preserving multilingual embeddings.
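To make the idea of graph-coupled alignment concrete, here is a minimal sketch of the general pattern the abstract describes: each language learns a linear alignment map from a handful of samples, with Frobenius-norm regularization and a graph penalty that pulls the maps of similar languages toward each other. All names, sizes, weights, and the synthetic data below are illustrative assumptions, not the paper's actual formulation, data, or similarity graph.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not from the paper): 3 "languages",
# 10 samples each, aligning 8-dim source embeddings to an 8-dim target space.
d = 8
langs = ["en", "de", "nl"]
# A shared ground-truth map with small per-language perturbations, so that
# coupling related languages should help in the few-shot regime.
W_true = rng.normal(size=(d, d))
X = {l: rng.normal(size=(10, d)) for l in langs}
Y = {l: X[l] @ (W_true + 0.05 * rng.normal(size=(d, d))) for l in langs}

# Hypothetical language-similarity graph: edge weights in [0, 1].
edges = {("en", "de"): 0.8, ("de", "nl"): 0.9, ("en", "nl"): 0.7}

lam, mu = 0.1, 0.5  # Frobenius and graph-penalty strengths (assumed values)

def loss(W):
    """Per-language alignment error + Frobenius norm + graph smoothness."""
    val = sum(np.linalg.norm(X[l] @ W[l] - Y[l]) ** 2
              + lam * np.linalg.norm(W[l]) ** 2 for l in langs)
    val += mu * sum(s * np.linalg.norm(W[i] - W[j]) ** 2
                    for (i, j), s in edges.items())
    return val

# Plain synchronous gradient descent on all per-language maps jointly.
W = {l: np.zeros((d, d)) for l in langs}
lr = 0.01
for _ in range(2000):
    grads = {}
    for l in langs:
        g = 2 * X[l].T @ (X[l] @ W[l] - Y[l]) + 2 * lam * W[l]
        for (i, j), s in edges.items():
            if l == i:
                g += 2 * mu * s * (W[l] - W[j])
            elif l == j:
                g += 2 * mu * s * (W[l] - W[i])
        grads[l] = g
    for l in langs:
        W[l] -= lr * grads[l]
```

The graph term is what distinguishes this from fitting each language independently: with only 10 samples per language, the coupling lets related languages share statistical strength, which is the intuition behind why a stronger similarity edge should improve few-shot alignment.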