CCTU: A Benchmark for Tool Use under Complex Constraints

📅 2026-03-16
📈 Citations: 0
Influential: 0
🤖 AI Summary
Current large language models (LLMs) lack systematic evaluation of their tool-use capabilities under complex explicit constraints. This work proposes CCTU, the first benchmark framework designed specifically to assess tool use under such conditions. It introduces a taxonomy of 12 constraint categories spanning four dimensions (resource, behavior, toolset, and response) and comprises 200 challenging multi-turn interaction scenarios. The framework includes an executable validation module that enables fine-grained, step-level compliance checking. Evaluations of nine state-of-the-art models show that, when strict adherence to all constraints is required, task completion rates remain below 20%, and constraints are violated in over 50% of cases. Moreover, models struggle to self-correct even after receiving detailed feedback on violations, highlighting a significant deficiency in current systems' ability to comply with complex constraints.
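
The page does not reproduce the validator's interface, so the following is only a minimal sketch of what step-level constraint checking across the four dimensions could look like; every name in it (`ToolCall`, `Constraint`, `validate_step`, and the two example constraints) is a hypothetical stand-in, not CCTU's actual API:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical illustration of step-level constraint validation;
# none of these names come from the CCTU release.

@dataclass
class ToolCall:
    tool: str
    args: dict

@dataclass
class Constraint:
    name: str                                # e.g. "max_tool_calls"
    dimension: str                           # "resource" | "behavior" | "toolset" | "response"
    check: Callable[[ToolCall, dict], bool]  # True if the step complies

def validate_step(call: ToolCall, constraints: list[Constraint], state: dict) -> list[str]:
    """Return the names of constraints violated by one tool call, given interaction state."""
    return [c.name for c in constraints if not c.check(call, state)]

# Two invented example constraints: a resource cap on total tool calls,
# and a toolset restriction forbidding one tool.
constraints = [
    Constraint("max_tool_calls", "resource",
               lambda call, st: st["n_calls"] < 10),
    Constraint("forbidden_tool", "toolset",
               lambda call, st: call.tool != "shell_exec"),
]

state = {"n_calls": 3}
violations = validate_step(ToolCall("shell_exec", {"cmd": "ls"}), constraints, state)
print(violations)  # ['forbidden_tool']
```

Representing each constraint as a predicate over the current tool call and interaction state is one natural way to produce the per-step, per-constraint violation reports the summary describes.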

📝 Abstract
Solving problems through tool use under explicit constraints constitutes a highly challenging yet unavoidable scenario for large language models (LLMs), requiring capabilities such as function calling, instruction following, and self-refinement. However, progress has been hindered by the absence of dedicated evaluations. To address this, we introduce CCTU, a benchmark for evaluating LLM tool use under complex constraints. CCTU is grounded in a taxonomy of 12 constraint categories spanning four dimensions (i.e., resource, behavior, toolset, and response). The benchmark comprises 200 carefully curated and challenging test cases across diverse tool-use scenarios, each involving an average of seven constraint types and an average prompt length exceeding 4,700 tokens. To enable reliable evaluation, we develop an executable constraint validation module that performs step-level validation and enforces compliance during multi-turn interactions between models and their environments. We evaluate nine state-of-the-art LLMs in both thinking and non-thinking modes. Results indicate that when strict adherence to all constraints is required, no model achieves a task completion rate above 20%. Further analysis reveals that models violate constraints in over 50% of cases, particularly in the resource and response dimensions. Moreover, LLMs demonstrate limited capacity for self-refinement even after receiving detailed feedback on constraint violations, highlighting a critical bottleneck in the development of robust tool-use agents. To facilitate future research, we release the data and code.
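
Building on the validator sketch above, the abstract's description of compliance enforcement with violation feedback during multi-turn interaction suggests an evaluation loop roughly like the following; this is again a speculative illustration, and `propose_tool_call`, `execute`, and `task_complete` are invented stand-ins rather than CCTU's interface:

```python
# Illustrative multi-turn evaluation loop with violation feedback,
# reusing ToolCall, Constraint, and validate_step from the sketch above.
# All names here are hypothetical, not the CCTU codebase.

def execute(call: ToolCall) -> str:
    """Stub tool executor: a real harness would actually run the tool."""
    return f"ok: {call.tool}"

def task_complete(result: str) -> bool:
    """Stub completion check: a real harness would verify the task goal."""
    return result.startswith("ok")

def run_episode(model, task: str, constraints: list[Constraint],
                max_turns: int = 8) -> dict:
    """Drive one episode; violation details are fed back so the model can
    attempt self-refinement (which the paper reports rarely succeeds)."""
    state = {"n_calls": 0}
    history: list[str] = [task]
    for _ in range(max_turns):
        call = model.propose_tool_call(history)  # hypothetical model API
        violations = validate_step(call, constraints, state)
        if violations:
            # Detailed, named feedback on each violated constraint.
            history.append(f"Constraint violations: {violations}")
            continue
        state["n_calls"] += 1
        result = execute(call)
        history.append(result)
        if task_complete(result):
            return {"success": True, "turns": state["n_calls"]}
    return {"success": False, "turns": state["n_calls"]}
```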
Problem

Research questions and friction points this paper is trying to address.

tool use
complex constraints
large language models
benchmark
constraint validation
Innovation

Methods, ideas, or system contributions that make the work stand out.

tool use
complex constraints
benchmark
constraint validation
large language models
Junjie Ye
Fudan University
Computer Science, Natural Language Processing, Large Language Models, Tool Learning
Guoqiang Zhang
College of Computer Science and Artificial Intelligence, Fudan University
Wenjie Fu
Ph.D., Southeast University
VLSI design and test automation
Tao Gui
College of Computer Science and Artificial Intelligence, Fudan University
Qi Zhang
Fudan University
SAGIN, satellite routing
Xuanjing Huang
College of Computer Science and Artificial Intelligence, Fudan University