🤖 AI Summary
This work addresses a critical failure mode in multilingual tool use by large language models (LLMs): even when intent understanding is correct, execution often fails because the language of generated parameter values does not match the format that tool specifications expect. To investigate this systematically, the authors introduce MLCL, a diagnostic multilingual tool-use benchmark covering Chinese, Hindi, and the low-resource language Igbo. Through fine-grained error analysis, they identify "parameter value language mismatch" as a key source of failure. Applying inference-time strategies, such as language normalization and rewriting, substantially reduces language-induced execution errors. Nevertheless, performance still falls short of English-level accuracy, revealing fundamental limitations of current LLMs in multilingual tool invocation.
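The mismatch described above can be made concrete with a minimal sketch. The tool, its accepted values, and the normalization table below are hypothetical illustrations, not part of the MLCL benchmark: a model correctly selects the tool but fills a parameter with a value in the user's language, and a simple inference-time rewriting step maps it back to the tool's expected form.

```python
# Hypothetical mapping from localized parameter values to the canonical
# forms a tool expects (illustration only; not from the paper).
CANONICAL_CITIES = {"北京": "Beijing", "दिल्ली": "Delhi"}

def get_weather(city: str) -> str:
    # Hypothetical tool: its spec accepts only canonical English city names.
    if city not in {"Beijing", "Delhi"}:
        raise ValueError(f"unknown city: {city!r}")
    return f"weather for {city}"

def normalize_call(tool_args: dict) -> dict:
    # Inference-time rewriting: map localized values to canonical ones,
    # leaving already-canonical values untouched.
    return {k: CANONICAL_CITIES.get(v, v) for k, v in tool_args.items()}

# The model understood intent and chose the right tool, but emitted the
# parameter value in the user's language:
raw_args = {"city": "北京"}

# Executing the raw call would raise ValueError; normalizing first succeeds:
result = get_weather(**normalize_call(raw_args))
```

This is the shape of failure the benchmark isolates: the error occurs at execution time, after intent understanding and tool selection have already succeeded.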
📝 Abstract
Large Language Models (LLMs) are increasingly deployed as agents that invoke external tools through structured function calls. While recent work reports strong tool-calling performance under standard English-centric evaluations, the robustness of tool calling in multilingual user interactions remains underexplored. In this work, we introduce MLCL, a diagnostic benchmark, and conduct a systematic evaluation of multilingual tool calling across Chinese, Hindi, and the low-resource language Igbo. Through fine-grained error analysis, we show that many failures occur despite correct intent understanding and tool selection. We identify parameter value language mismatch as a dominant failure mode, in which models generate semantically appropriate parameter values in the user's language, thereby violating language-invariant execution conventions. We further evaluate several inference-time mitigation strategies and find that, while they substantially reduce language-induced execution errors, none fully recovers English-level performance.