🤖 AI Summary
To address the high computational overhead and the weak comprehension and execution of LLM agents on lengthy, complex policy documents, this paper proposes a systematic policy-internalization framework. Methodologically, it introduces (1) CC-Gen, a controllable-complexity benchmark generator enabling multi-level policy evaluation; (2) CAP-CPT, a category-aware continual pretraining approach that jointly models factual, behavioral, and conditional policy types, integrated with an automated parsing pipeline and chain-of-thought annotation to reduce data-curation and inference costs; and (3) joint optimization via supervised fine-tuning and autoregressive pretraining. Evaluated on Qwen-3-32B, the framework achieves up to 41% and 22% gains in task accuracy, compresses prompt length by 97.3%, and markedly improves generalization on tau-Bench, despite using only minimal fine-tuning data.
📄 Abstract
Large Language Model (LLM)-based agentic systems rely on in-context policy documents encoding diverse business rules. As requirements grow, these documents expand rapidly, causing high computational overhead. This motivates internalization methods that embed policy documents into model priors while preserving performance. Prior prompt-compression work targets generic prompts, but agentic policy documents span multiple complexity levels and require deeper reasoning, making internalization harder. We introduce CC-Gen, an agentic benchmark generator with Controllable Complexity across four levels, enabling systematic evaluation of agents' ability to handle complexity and offering a unified framework for assessing policy internalization. Our analysis shows that complex policy specifications governing workflows pose major reasoning challenges. Supervised fine-tuning (SFT) on gold user-agent interaction trajectories with chain-of-thought (CoT) annotations can support internalization, but it is data-intensive and its performance degrades sharply as policy complexity increases. To mitigate data and reasoning burdens, we propose Category-Aware Policy Continued Pretraining (CAP-CPT). Our automated pipeline parses policy documents to extract key specifications, grouping them into factual, behavioral, and conditional categories and isolating the complex conditions that drive workflow complexity. This guides targeted data synthesis and enables agents to internalize policy information through an autoregressive pretraining loss. Experiments show CAP-CPT improves over SFT baselines in all settings, with up to 41% and 22% gains on Qwen-3-32B, achieving 97.3% prompt length reduction on CC-Gen and further improving results on tau-Bench with minimal SFT data.
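The two core ideas in the abstract, sorting policy specifications into factual, behavioral, and conditional categories, and then training on the synthesized text with a standard autoregressive loss, can be illustrated with a toy sketch. The keyword heuristics and example clauses below are hypothetical stand-ins; the paper's actual pipeline parses documents automatically and trains a full LLM:

```python
# Toy sketch of CAP-CPT's two core ideas (hypothetical heuristics; the
# paper's real pipeline is automated and operates on full policy documents).

import math

def categorize_clause(clause: str) -> str:
    """Assign a policy clause to one of the three CAP-CPT categories."""
    lowered = clause.lower()
    if lowered.startswith(("if ", "when ", "unless ")):
        return "conditional"  # branching rules that drive workflow complexity
    if any(kw in lowered for kw in ("must ", "should ", "may not", "never ")):
        return "behavioral"   # prescribed agent actions
    return "factual"          # static facts (fees, limits, definitions)

def autoregressive_nll(next_token_probs: list[float]) -> float:
    """Autoregressive pretraining loss: mean negative log-likelihood of each
    correct next token given its prefix. `next_token_probs[i]` is the model's
    probability for the ground-truth token at position i."""
    return -sum(math.log(p) for p in next_token_probs) / len(next_token_probs)

policy = [
    "If the order was placed more than 30 days ago, refuse the refund.",
    "Agents must verify the customer's identity before any account change.",
    "The standard shipping fee is 4.99 USD.",
]
for clause in policy:
    print(f"[{categorize_clause(clause)}] {clause}")
```

Internalization succeeds to the extent that minimizing this loss on category-targeted synthetic text shifts the model's priors, so the policy no longer needs to appear in the prompt at inference time.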