AI Summary
Existing deep search agents perform poorly on complex, real-world rules with fuzzy boundaries and implicit logic (e.g., legal, medical, or tariff regulations), and mainstream benchmarks lack systematic evaluation of such rule-application capabilities. Method: We introduce HSCodeComp, the first expert-level benchmark for hierarchical rule application, grounded in the World Customs Organization's Harmonized System (HS) nomenclature. It leverages noisy, real-world e-commerce product descriptions to require precise prediction of 10-digit HS codes. The benchmark integrates multi-level rule structures, human expert annotations, and realistic data noise to rigorously assess agent reasoning under dense regulatory constraints. Contribution/Results: Experiments reveal that state-of-the-art agents achieve only 46.8% top-1 accuracy on 10-digit code prediction, far below human experts' 95.0%. Moreover, scaling model size yields negligible gains, exposing a fundamental bottleneck in deep semantic understanding and compositional reasoning over intricate, hierarchical rules.
Abstract
Effective deep search agents must not only access open-domain and domain-specific knowledge but also apply complex rules, such as legal clauses, medical manuals, and tariff regulations. These rules often feature vague boundaries and implicit logical relationships, making precise application challenging for agents. However, this critical capability is largely overlooked by current agent benchmarks.
To fill this gap, we introduce HSCodeComp, the first realistic, expert-level e-commerce benchmark designed to evaluate deep search agents on hierarchical rule application. In this task, the deep reasoning process of agents is guided by these rules to predict the 10-digit Harmonized System Code (HSCode) of products with noisy yet realistic descriptions. These codes, established by the World Customs Organization, are vital for global supply chain efficiency. Built from real-world data collected from large-scale e-commerce platforms, HSCodeComp comprises 632 product entries spanning diverse product categories, with HSCodes annotated by multiple human experts.
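HS codes are hierarchical by construction: the first two digits identify the chapter, the first four the heading, the first six the WCO-harmonized subheading, and the remaining digits are country-specific extensions. A minimal sketch of this decomposition (the example code and the comment labels are illustrative, not entries from the benchmark):

```python
def hs_levels(code: str) -> dict:
    """Split a 10-digit HS code into its nested hierarchical prefixes."""
    digits = code.replace(".", "")  # tolerate dotted notation like "6109.10.00.12"
    assert digits.isdigit() and len(digits) == 10, "expected a 10-digit code"
    return {
        "chapter": digits[:2],      # broadest WCO level (2 digits)
        "heading": digits[:4],      # 4-digit heading within the chapter
        "subheading": digits[:6],   # finest internationally harmonized level
        "national": digits,         # full 10-digit country-specific code
    }

# Illustrative code only, not taken from HSCodeComp.
print(hs_levels("6109.10.00.12"))
# → {'chapter': '61', 'heading': '6109', 'subheading': '610910', 'national': '6109100012'}
```

Each level narrows the previous one, which is why a single wrong digit early in the hierarchy invalidates every level below it.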
Extensive experiments on several state-of-the-art LLMs and open-source and closed-source agents reveal a huge performance gap: the best agent achieves only 46.8% 10-digit accuracy, far below the 95.0% of human experts. Moreover, detailed analysis demonstrates the difficulty of hierarchical rule application, and test-time scaling fails to improve performance further.
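One natural way to quantify where agents fail along the hierarchy is prefix accuracy: a prediction counts as correct at depth k if its first k digits match the gold code. A minimal sketch, assuming flat lists of predicted and gold 10-digit code strings (the function name and sample data are hypothetical, not the benchmark's official scorer):

```python
def prefix_accuracy(preds: list[str], golds: list[str], k: int) -> float:
    """Fraction of predictions whose first k digits match the gold HS code."""
    assert len(preds) == len(golds) and len(golds) > 0
    hits = sum(p[:k] == g[:k] for p, g in zip(preds, golds))
    return hits / len(golds)

# Hypothetical predictions vs. gold labels for three products.
preds = ["6109100012", "6109900010", "4202210000"]
golds = ["6109100012", "6110200010", "4202220000"]
for k in (2, 4, 6, 10):  # chapter, heading, subheading, full code
    print(f"accuracy@{k} digits: {prefix_accuracy(preds, golds, k):.3f}")
```

Reporting accuracy at each depth separates errors in coarse categorization (wrong chapter) from errors in fine-grained rule application (wrong national extension).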