🤖 AI Summary
Evaluating the robustness of function calling in multi-turn LLM dialogues under realistic mobile scenarios remains challenging due to dynamic user behavior and environmental constraints. Method: This paper introduces HammerBench, the first fine-grained, mobile-oriented benchmark for function-call evaluation. It proposes a hybrid data-construction paradigm that combines anonymized real-world user logs with dialogues generated by open-source models, and designs a turn-level interactive snapshot assessment mechanism that enables dynamic trajectory tracking and parameter-level error attribution. Contribution/Results: It is the first work to enable precise diagnosis of complex phenomena (imperfect instructions, intent drift, and pronoun-based references to external information) within mobile dialogue contexts. Empirical analysis reveals that parameter-name errors (e.g., spelling mistakes, case mismatches, and abbreviations) are a significant source of failure across interaction scenarios. The benchmark thus provides an interpretable, localizable foundation for diagnosing and improving the robustness of mobile-assistant LLMs.
📝 Abstract
Evaluating the performance of LLMs in multi-turn human-agent interactions presents significant challenges, particularly due to the complexity and variability of user behavior. In this paper, we introduce HammerBench, a novel benchmark framework for assessing LLMs' function-calling capabilities in real-world, multi-turn dialogues. HammerBench simulates diverse mobile assistant use cases, incorporating imperfect instructions, dynamic question-answer trajectories, intent and argument shifts, and the indirect use of external information through pronouns. To construct this benchmark, we curate a comprehensive dataset derived from popular mobile app functionalities and anonymized user logs, complemented by a cost-effective data generation pipeline leveraging open-source models. HammerBench is further augmented with fine-grained interaction snapshots and metrics, enabling detailed evaluation of function-calling performance across individual conversational turns. We demonstrate the effectiveness of HammerBench by evaluating several leading LLMs and uncovering key performance trends. Our experiments reveal that different types of parameter name errors are a significant source of failure across different interaction scenarios, highlighting critical areas for further improvement in LLM robustness for mobile assistant applications.
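To make the parameter-level error attribution concrete, the sketch below shows one plausible way to classify why a predicted parameter name fails to match a gold-schema name, using the error categories the abstract highlights (spelling mistakes, case mismatches, abbreviations). This is an illustrative assumption, not HammerBench's actual implementation; the function names, similarity threshold, and matching heuristics are all hypothetical.

```python
from difflib import SequenceMatcher

def classify_param_name_error(predicted: str, gold: str) -> str:
    """Hypothetical classifier for parameter-name mismatches.

    Categories mirror the paper's analysis (case mismatch, abbreviation,
    spelling mistake); the 0.8 similarity threshold is an assumption.
    """
    if predicted == gold:
        return "exact_match"
    if predicted.lower() == gold.lower():
        return "case_mismatch"          # e.g. "starttime" vs "startTime"
    # Treat a short prefix of the gold name as an abbreviation,
    # e.g. "dest" for "destination".
    if gold.lower().startswith(predicted.lower()) and len(predicted) >= 3:
        return "abbreviation"
    # High character overlap but no exact match: likely a typo.
    if SequenceMatcher(None, predicted.lower(), gold.lower()).ratio() > 0.8:
        return "spelling_mistake"
    return "wrong_parameter"

def attribute_errors(predicted_call: dict, gold_call: dict) -> dict:
    """Turn-level, parameter-level attribution for a single function call:
    for each gold parameter, find the closest predicted name and label it."""
    report = {}
    for gold_name in gold_call:
        if not predicted_call:
            report[gold_name] = "missing"
            continue
        best = max(
            predicted_call,
            key=lambda p: SequenceMatcher(None, p.lower(), gold_name.lower()).ratio(),
        )
        report[gold_name] = classify_param_name_error(best, gold_name)
    return report
```

Running such a classifier per conversational turn is one way a snapshot-style evaluation could localize failures to individual parameters rather than whole calls, e.g. `attribute_errors({"starttime": "9am"}, {"startTime": "9am"})` labels `startTime` as a case mismatch.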