🤖 AI Summary
This work addresses the limited generalization and suboptimal performance of enterprise search agents on complex, hard-to-verify tasks by proposing a multi-task reinforcement learning (RL) framework for knowledge-augmented agent training. The approach integrates long-horizon reasoning with tool use, constructs an iterative self-bootstrapping pipeline for synthetic data generation, and employs large-batch off-policy RL to enable efficient multi-task post-training with strong out-of-distribution generalization. Key contributions include KARLBench, an evaluation suite for agent reasoning, along with cross-task generalization strategies and a new multi-task RL post-training paradigm. Experiments show that the method is Pareto-optimal on KARLBench, outperforming Claude 4.6 and GPT 5.2 on cost-quality and latency-quality trade-offs while maintaining robust generalization on out-of-distribution tasks.
📝 Abstract
We present a system for training enterprise search agents via reinforcement learning that achieves state-of-the-art performance across a diverse suite of hard-to-verify agentic search tasks. Our work makes four core contributions. First, we introduce KARLBench, a multi-capability evaluation suite spanning six distinct search regimes: constraint-driven entity search, cross-document report synthesis, tabular numerical reasoning, exhaustive entity retrieval, procedural reasoning over technical documentation, and fact aggregation over internal enterprise notes. Second, we show that models trained across heterogeneous search behaviors generalize substantially better than those optimized for any single benchmark. Third, we develop an agentic synthesis pipeline that employs long-horizon reasoning and tool use to generate diverse, grounded, and high-quality training data, with iterative bootstrapping from increasingly capable models. Fourth, we propose a new post-training paradigm based on iterative large-batch off-policy RL that is sample-efficient, robust to train-inference engine discrepancies, and naturally extends to multi-task training with out-of-distribution generalization. Compared to Claude 4.6 and GPT 5.2, KARL is Pareto-optimal on KARLBench across cost-quality and latency-quality trade-offs, including tasks that were out-of-distribution during training. With sufficient test-time compute, it surpasses the strongest closed models. These results show that tailored synthetic data, combined with multi-task reinforcement learning, enables cost-efficient and high-performing knowledge agents for grounded reasoning.
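The abstract does not spell out the training loop, but the iterative large-batch off-policy RL paradigm it names has a well-known generic shape: freeze a behavior snapshot of the policy, collect one large batch of rollouts from it, then reuse that batch for several gradient epochs with clipped importance weighting before refreshing the snapshot. The sketch below illustrates only that generic pattern on a toy softmax policy over three actions; the reward table, hyperparameters, and all function names are illustrative assumptions, not the paper's method.

```python
import math
import random

random.seed(0)

ACTIONS = [0, 1, 2]
TRUE_REWARD = {0: 0.2, 1: 0.9, 2: 0.5}  # toy stand-in for task reward

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def sample_batch(behavior_logits, batch_size):
    """Collect one large batch from a frozen behavior snapshot."""
    probs = softmax(behavior_logits)
    batch = []
    for _ in range(batch_size):
        a = random.choices(ACTIONS, weights=probs)[0]
        r = TRUE_REWARD[a] + random.gauss(0, 0.1)
        # Store the behavior probability for off-policy correction later.
        batch.append((a, r, probs[a]))
    return batch

def train(rounds=5, batch_size=256, epochs=4, lr=0.5, clip=2.0):
    logits = [0.0] * len(ACTIONS)
    for _ in range(rounds):
        behavior = list(logits)              # freeze the snapshot
        batch = sample_batch(behavior, batch_size)
        baseline = sum(r for _, r, _ in batch) / len(batch)
        for _ in range(epochs):              # reuse the batch off-policy
            probs = softmax(logits)
            grads = [0.0] * len(ACTIONS)
            for a, r, mu in batch:
                w = min(probs[a] / mu, clip)  # clipped importance ratio
                adv = r - baseline
                for i in ACTIONS:
                    indicator = 1.0 if i == a else 0.0
                    # REINFORCE: d log pi(a) / d logit_i = 1{i=a} - pi(i)
                    grads[i] += w * adv * (indicator - probs[i])
            logits = [l + lr * g / len(batch) for l, g in zip(logits, grads)]
    return softmax(logits)

final_probs = train()
```

Even in this toy setting, the clipped importance ratio is what makes multiple gradient epochs on a stale batch safe; the same mechanism is what tolerates train-inference engine discrepancies, since rollouts generated by a slightly different engine are just another form of off-policy data.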