Multimodal Policy Internalization for Conversational Agents

📅 2025-10-10
📈 Citations: 0
Influential: 0
🤖 AI Summary
Existing multimodal conversational agents rely on lengthy in-context prompts to enforce predefined policies (such as tool invocation, stylistic constraints, and visual behavior norms), resulting in high inference overhead, poor generalization, and no mechanism for internalizing policies into model parameters. Method: This paper introduces the new task of *multimodal policy internalization* (MPI). The authors construct two text–vision policy datasets spanning synthetic and real-world scenarios and propose TriMPI, a three-stage training framework comprising continual pretraining, supervised fine-tuning, and policy-aware reinforcement learning. Central to the framework is PolicyRollout, a GRPO-style RL method that augments rollouts with policy-aware responses for grounded exploration. Contribution/Results: Experiments show significant gains in end-to-end policy-following accuracy, cross-scenario generalization, and robustness to catastrophic forgetting. The work provides the first reproducible training recipes, datasets, and systematic evaluation benchmark for multimodal policy internalization.

📝 Abstract
Modern conversational agents like ChatGPT and Alexa+ rely on predefined policies specifying metadata, response styles, and tool-usage rules. As these LLM-based systems expand to support diverse business and user queries, such policies, often implemented as in-context prompts, are becoming increasingly complex and lengthy, making faithful adherence difficult and imposing large fixed computational costs. With the rise of multimodal agents, policies that govern visual and multimodal behaviors are critical but remain understudied. Prior prompt-compression work mainly shortens task templates and demonstrations, while existing policy-alignment studies focus only on text-based safety rules. We introduce Multimodal Policy Internalization (MPI), a new task that internalizes reasoning-intensive multimodal policies into model parameters, enabling stronger policy-following without including the policy during inference. MPI poses unique data and algorithmic challenges. We build two datasets spanning synthetic and real-world decision-making and tool-using tasks and propose TriMPI, a three-stage training framework. TriMPI first injects policy knowledge via continual pretraining, then performs supervised finetuning, and finally applies PolicyRollout, a GRPO-style reinforcement learning extension that augments rollouts with policy-aware responses for grounded exploration. TriMPI achieves notable gains in end-to-end accuracy, generalization, and robustness to forgetting. As the first work on multimodal policy internalization, we provide datasets, training recipes, and comprehensive evaluations to foster future research. Project page: https://mikewangwzhl.github.io/TriMPI.
Problem

Research questions and friction points this paper is trying to address.

Internalizing complex multimodal policies into model parameters for conversational agents
Addressing policy adherence difficulties and high computational costs in LLM systems
Developing methods to handle multimodal behaviors beyond text-based safety rules
Innovation

Methods, ideas, or system contributions that make the work stand out.

Internalizes multimodal policies into model parameters
Uses three-stage training: continual pretraining, supervised finetuning, and reinforcement learning
Applies PolicyRollout reinforcement learning for policy exploration
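The PolicyRollout idea above can be sketched in a few lines. This is a minimal, illustrative Python sketch of a GRPO-style update step in which the rollout group is augmented with "policy-aware" responses, i.e. samples generated with the policy text in context; the function names, group sizes, and reward details are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch of PolicyRollout-style rollout collection and GRPO-style
# group-relative advantage computation. All names and sizes here are
# illustrative assumptions, not the authors' code.
from statistics import mean, pstdev


def group_relative_advantages(rewards):
    """GRPO-style: normalize each reward against its rollout group's
    mean and standard deviation (no learned value function needed)."""
    mu = mean(rewards)
    sigma = pstdev(rewards) or 1.0  # guard against a zero-variance group
    return [(r - mu) / sigma for r in rewards]


def policy_rollout_group(sample_fn, prompt, policy_text,
                         n_plain=4, n_policy_aware=2):
    """Collect rollouts from the policy-free prompt, then augment the
    group with a few responses generated with the policy in context,
    so exploration stays grounded in policy-compliant behavior."""
    plain = [sample_fn(prompt) for _ in range(n_plain)]
    aware = [sample_fn(policy_text + "\n" + prompt)
             for _ in range(n_policy_aware)]
    return plain + aware
```

Because all rollouts (plain and policy-aware) share one advantage normalization, policy-compliant samples that earn higher reward receive positive advantages and pull the policy-free model toward internalized behavior.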