🤖 AI Summary
Existing benchmarks for LLM productivity assistants suffer from three key limitations: insufficient multilingual coverage, inadequate modeling of implicit constraints, and neglect of multi-turn dialogue complexity. To address these gaps, we propose TRUEBench, a multilingual, multi-turn benchmark designed to evaluate real-world instruction following under both explicit and implicit constraints. Our contributions are threefold: (1) input prompts spanning 12 languages, including intra-instance multilingual instructions; (2) multi-turn dialogue scenarios with accumulating constraints and context switches, scored against criteria covering both explicit and implicit constraints; and (3) an LLM-based validator that refines constraints to improve annotation reliability. Experiments reveal significant performance bottlenecks in state-of-the-art models: OpenAI o1, for example, achieves only a 69.07% overall pass rate, underscoring TRUEBench's difficulty and its practical validity for realistic assessment.
📝 Abstract
Large language models (LLMs) are increasingly integral as productivity assistants, but existing benchmarks fall short in rigorously evaluating their real-world instruction-following capabilities. Current benchmarks often (i) lack sufficient multilinguality, (ii) fail to capture the implicit constraints inherent in user requests, and (iii) overlook the complexities of multi-turn dialogue. To address these critical gaps and provide a more realistic assessment, we introduce TRUEBench (Trustworthy Real-world Usage Evaluation Benchmark), a novel benchmark specifically designed for LLM-based productivity assistants. TRUEBench distinguishes itself by featuring input prompts across 12 languages, incorporating intra-instance multilingual instructions, employing rigorous evaluation criteria to capture both explicit and implicit constraints, and including complex multi-turn dialogue scenarios with both accumulating constraints and context switches. Furthermore, to ensure reliability in evaluation, we refined constraints using an LLM validator. Extensive experiments demonstrate that TRUEBench presents significantly greater challenges than existing benchmarks; for instance, a strong model like OpenAI o1 achieved only a 69.07% overall pass rate. TRUEBench offers a demanding and realistic assessment of LLMs in practical productivity settings, highlighting their capabilities and limitations.
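To make the "overall pass rate" metric concrete, here is a minimal sketch of one plausible aggregation scheme: each benchmark instance carries several constraint verdicts (explicit or implicit), and an instance counts as passed only if every constraint is satisfied. Note that this all-or-nothing aggregation, along with the `Instance` structure and function names, is an assumption for illustration; the abstract does not specify TRUEBench's exact scoring procedure.

```python
from dataclasses import dataclass


@dataclass
class Instance:
    # Hypothetical structure: one boolean verdict per constraint
    # (explicit or implicit) attached to this benchmark instance.
    constraint_results: list[bool]


def instance_passes(inst: Instance) -> bool:
    # Assumed all-or-nothing rule: a single violated constraint
    # fails the whole instance.
    return all(inst.constraint_results)


def overall_pass_rate(instances: list[Instance]) -> float:
    # Fraction of instances in which every constraint was satisfied.
    passed = sum(instance_passes(i) for i in instances)
    return passed / len(instances)


results = [
    Instance([True, True, True]),  # all constraints met -> pass
    Instance([True, False]),       # one implicit constraint missed -> fail
    Instance([True]),              # pass
]
print(round(overall_pass_rate(results), 2))  # 0.67
```

Under this reading, a score like 69.07% means roughly seven in ten instances had every one of their constraints satisfied, which is a stricter signal than per-constraint accuracy.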