Bilingual Text-to-Motion Generation: A New Benchmark and Baselines

📅 2026-03-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses cross-lingual text-to-motion generation, which is primarily constrained by the scarcity of bilingual data and the limited cross-lingual semantic alignment capabilities of existing language models. To this end, the authors introduce BiHumanML3D, the first bilingual text-motion benchmark dataset, and propose BiMD, a bilingual motion diffusion model that incorporates an explicit Cross-Lingual Alignment (CLA) mechanism. BiMD synthesizes high-quality motion from bilingual textual inputs and supports zero-shot language transfer. On BiHumanML3D, BiMD achieves a Fréchet Inception Distance (FID) of 0.045, significantly outperforming the best baseline (0.169), and a Recall@3 (R@3) of 82.8%, surpassing the best baseline's 80.8%. These results substantially exceed those of both monolingual and translation-based approaches.

📝 Abstract
Text-to-motion generation holds significant potential for cross-linguistic applications, yet it is hindered by the lack of bilingual datasets and the poor cross-lingual semantic understanding of existing language models. To address these gaps, we introduce BiHumanML3D, the first bilingual text-to-motion benchmark, constructed via LLM-assisted annotation and rigorous manual correction. Furthermore, we propose a simple yet effective baseline, Bilingual Motion Diffusion (BiMD), featuring Cross-Lingual Alignment (CLA). CLA explicitly aligns semantic representations across languages, creating a robust conditional space that enables high-quality motion generation from bilingual inputs, including zero-shot code-switching scenarios. Extensive experiments demonstrate that BiMD with CLA achieves an FID of 0.045 vs. 0.169 and an R@3 of 82.8\% vs. 80.8\%, significantly outperforming monolingual diffusion models and translation baselines on BiHumanML3D, underscoring the necessity and reliability of our dataset and the effectiveness of our alignment strategy for cross-lingual motion synthesis. The dataset and code are released at \href{https://wengwanjiang.github.io/BilingualT2M-page}{https://wengwanjiang.github.io/BilingualT2M-page}.
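The abstract describes CLA as explicitly aligning semantic representations of paired captions across languages to form a shared conditional space. The paper's exact objective is not given here; a common way to realize such alignment is a symmetric InfoNCE-style contrastive loss over paired bilingual text embeddings. The sketch below illustrates that general idea only; the function names, embedding dimensions, and temperature are illustrative assumptions, not BiMD's actual implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    """Normalize embeddings to unit length so dot products are cosine similarities."""
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cross_lingual_alignment_loss(en_emb, zh_emb, temperature=0.07):
    """Hypothetical CLA-style objective: a symmetric contrastive loss that pulls
    paired English/Chinese caption embeddings together and pushes mismatched
    pairs apart. Row i of en_emb and row i of zh_emb describe the same motion."""
    en = l2_normalize(np.asarray(en_emb, dtype=np.float64))
    zh = l2_normalize(np.asarray(zh_emb, dtype=np.float64))
    logits = en @ zh.T / temperature          # (B, B); matched pairs on the diagonal
    labels = np.arange(len(en))

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)            # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average both retrieval directions (en -> zh and zh -> en).
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Under this sketch, correctly paired embeddings yield a lower loss than shuffled pairs, which is the property an alignment term needs to shape a language-agnostic conditional space for the diffusion model.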
Problem

Research questions and friction points this paper is trying to address.

text-to-motion generation
bilingual dataset
cross-lingual semantic understanding
language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

bilingual text-to-motion
cross-lingual alignment
motion diffusion
BiHumanML3D
zero-shot code-switching
Wanjiang Weng
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China
Xiaofeng Tan
Research Intern at Tencent; Master at Southeast University; Dual BSc at Shenzhen University.
AIGC; RLHF
Xiangbo Shu
Nanjing University of Science and Technology
Guo-Sen Xie
Professor, Nanjing University of Science and Technology
Computer Vision; Machine Learning
Pan Zhou
Assistant Professor at SMU
Machine Learning; Optimization; Computer Vision
Hongsong Wang
School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China