Analyzing and Internalizing Complex Policy Documents for LLM Agents

📅 2025-10-13
📈 Citations: 0
✨ Influential: 0
📄 PDF
🤖 AI Summary
To address the high computational overhead and weak comprehension/execution of LLM agents processing lengthy, complex policy documents, this paper proposes a systematic policy internalization framework. Methodologically, it introduces (1) CC-Gen, a controllable-complexity benchmark generator enabling multi-level policy evaluation; (2) CAP-CPT, a category-aware continued pretraining approach that jointly models factual, behavioral, and conditional policy types, integrated with an automated parsing pipeline and chain-of-thought annotation to reduce data-curation and inference costs; and (3) joint optimization via supervised fine-tuning and autoregressive pretraining. Evaluated on Qwen-3-32B, the framework achieves up to 41% and 22% improvements in task accuracy, compresses prompt length by 97.3%, and significantly enhances generalization on tau-Bench, despite using only minimal fine-tuning data.

๐Ÿ“ Abstract
Large Language Model (LLM)-based agentic systems rely on in-context policy documents encoding diverse business rules. As requirements grow, these documents expand rapidly, causing high computational overhead. This motivates developing internalization methods that embed policy documents into model priors while preserving performance. Prior prompt compression work targets generic prompts, but agentic policy documents span multiple complexity levels and require deeper reasoning, making internalization harder. We introduce CC-Gen, an agentic benchmark generator with Controllable Complexity across four levels, enabling systematic evaluation of agents' ability to handle complexity and offering a unified framework for assessing policy internalization. Our analysis shows that complex policy specifications governing workflows pose major reasoning challenges. Supporting internalization with gold user-agent interaction trajectories containing chain-of-thought (CoT) annotations via supervised fine-tuning (SFT) is data-intensive, and performance degrades sharply as policy complexity increases. To mitigate data and reasoning burdens, we propose Category-Aware Policy Continued Pretraining (CAP-CPT). Our automated pipeline parses policy documents to extract key specifications, grouping them into factual, behavioral, and conditional categories, and isolating complex conditions that drive workflow complexity. This guides targeted data synthesis and enables agents to internalize policy information through an autoregressive pretraining loss. Experiments show CAP-CPT improves SFT baselines in all settings, with up to 41% and 22% gains on Qwen-3-32B, achieving 97.3% prompt length reduction on CC-Gen and further enhancing tau-Bench with minimal SFT data.
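The abstract's three-way split of policy specifications can be illustrated with a minimal sketch. The keyword heuristics below are illustrative assumptions only; the paper's actual pipeline is an automated parser, not these regexes:

```python
import re

# Hypothetical keyword heuristics (assumption, not the paper's parser).
CONDITIONAL = re.compile(r"\b(if|when|unless|provided that|in case)\b", re.I)
BEHAVIORAL = re.compile(r"\b(must|should|shall|never|always|do not)\b", re.I)

def categorize(spec: str) -> str:
    """Assign a policy specification to one of the three CAP-CPT categories."""
    if CONDITIONAL.search(spec):
        return "conditional"  # branching rules that drive workflow complexity
    if BEHAVIORAL.search(spec):
        return "behavioral"   # prescribed agent actions
    return "factual"         # static domain facts
```

Conditional rules are checked first because, per the abstract, complex conditions are what the pipeline isolates as the main driver of workflow complexity.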
Problem

Research questions and friction points this paper is trying to address.

Addressing computational overhead from expanding policy documents in LLM agents
Developing methods to internalize complex multi-level policy specifications
Mitigating data-intensive reasoning challenges in policy-governed workflows
Innovation

Methods, ideas, or system contributions that make the work stand out.

Automated pipeline parses policy documents into factual, behavioral, and conditional specification categories
Category-aware continued pretraining (CAP-CPT) internalizes policy information via an autoregressive loss
Targeted data synthesis reduces reasoning and data-curation burdens
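The internalization objective itself is a standard autoregressive (next-token) pretraining loss over the synthesized policy text. A toy stand-in, using a smoothed bigram model in place of an LLM (an assumption made purely for brevity), shows the objective being computed:

```python
import math
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count next-token transitions in the synthesized policy corpus."""
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def autoregressive_loss(counts, tokens, vocab, alpha=1.0):
    """Mean next-token negative log-likelihood: the same objective an LLM
    minimizes during continued pretraining, here over a bigram model.
    Add-alpha smoothing keeps unseen transitions finite."""
    nll = 0.0
    for prev, nxt in zip(tokens, tokens[1:]):
        row = counts[prev]
        p = (row[nxt] + alpha) / (sum(row.values()) + alpha * vocab)
        nll -= math.log(p)
    return nll / (len(tokens) - 1)
```

Policy text the model has already absorbed scores a lower loss than unseen rule orderings; driving this loss down on category-targeted synthetic data is, at a high level, how CAP-CPT moves policy information from the prompt into model priors.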