TRUEBench: Can LLM Response Meet Real-world Constraints as Productivity Assistant?

📅 2025-09-24
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing benchmarks for LLM productivity assistants suffer from three key limitations: insufficient multilingual coverage, inadequate modeling of implicit constraints, and neglect of multi-turn dialogue complexity. To address these gaps, we propose TRUEBench, a multilingual, multi-turn benchmark explicitly designed to evaluate real-world instruction following under both explicit and implicit constraints. Our contributions are threefold: (1) input prompts spanning 12 languages, including intra-instance multilingual instructions; (2) multi-turn dialogue scenarios with accumulating constraints and context switches, scored against criteria that cover both explicit and implicit constraints; and (3) an LLM-based validator that refines constraints to improve evaluation reliability. Experimental results reveal significant performance bottlenecks in state-of-the-art models: for example, OpenAI o1 achieves only a 69.07% overall pass rate, demonstrating TRUEBench's difficulty and its validity as a realistic assessment.

📝 Abstract
Large language models (LLMs) are increasingly integral as productivity assistants, but existing benchmarks fall short in rigorously evaluating their real-world instruction-following capabilities. Current benchmarks often (i) lack sufficient multilinguality, (ii) fail to capture the implicit constraints inherent in user requests, and (iii) overlook the complexities of multi-turn dialogue. To address these critical gaps and provide a more realistic assessment, we introduce TRUEBench (Trustworthy Real-world Usage Evaluation Benchmark), a novel benchmark specifically designed for LLM-based productivity assistants. TRUEBench distinguishes itself by featuring input prompts across 12 languages, incorporating intra-instance multilingual instructions, employing rigorous evaluation criteria to capture both explicit and implicit constraints, and including complex multi-turn dialogue scenarios with both accumulating constraints and context switches. Furthermore, to ensure reliability in evaluation, we refined constraints using an LLM validator. Extensive experiments demonstrate that TRUEBench presents significantly greater challenges than existing benchmarks; for instance, a strong model like OpenAI o1 achieved only a 69.07% overall pass rate. TRUEBench offers a demanding and realistic assessment of LLMs in practical productivity settings, highlighting their capabilities and limitations.
Problem

Research questions and friction points this paper is trying to address.

Evaluating LLMs' real-world instruction-following with implicit constraints
Addressing multilingual gaps and complex multi-turn dialogue scenarios
Assessing productivity assistants' performance under realistic usage conditions
Innovation

Methods, ideas, or system contributions that make the work stand out.

Multilingual prompts across 12 languages
Rigorous evaluation capturing implicit constraints
Complex multi-turn dialogue scenarios
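The evaluation protocol implied by the abstract (per-instance constraint sets, all-or-nothing scoring, an overall pass rate such as o1's 69.07%) can be sketched as below. This is a minimal illustration, not the paper's implementation; the class and function names are hypothetical, and a toy substring check stands in for the LLM-based constraint judging the benchmark actually uses.

```python
# Hypothetical sketch of TRUEBench-style scoring: an instance passes only if
# the response satisfies every constraint (explicit and implicit), and the
# overall pass rate is the fraction of instances that pass.
from dataclasses import dataclass


@dataclass
class Constraint:
    description: str
    implicit: bool = False  # implicit constraints are unstated but expected


@dataclass
class Instance:
    prompt: str
    constraints: list  # in multi-turn dialogues, constraints accumulate here


def check(response: str, constraint: Constraint) -> bool:
    # Placeholder for an LLM-judge call in the real benchmark; a naive
    # substring match stands in for constraint verification here.
    return constraint.description in response


def overall_pass_rate(instances: list, responses: list) -> float:
    """All-or-nothing per instance: one failed constraint fails the instance."""
    passed = sum(
        all(check(resp, c) for c in inst.constraints)
        for inst, resp in zip(instances, responses)
    )
    return passed / len(instances) if instances else 0.0
```

The all-or-nothing aggregation is one plausible reading of "overall pass rate"; a benchmark could equally report per-constraint accuracy, which would yield higher numbers on the same data.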