Controlling Multimodal Conversational Agents with Coverage-Enhanced Latent Actions

πŸ“… 2026-01-12
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses two challenges in reinforcement-learning fine-tuning of multimodal conversational agents: the vast textual action space and the scarcity of paired image-text data. To overcome these limitations, the authors model an implicit (latent) action space with a learning-from-observation mechanism, in which future observations are used to infer the current latent action. They further improve generalization by training a cross-modal projector on both paired image-text data and text-only data, and introduce a cycle-consistency loss that makes the projector more robust and effectively expands coverage of the action space. Experimental results show that the proposed method significantly outperforms existing baselines on two dialogue tasks and exhibits consistent gains across multiple reinforcement-learning algorithms.
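The summary above describes a learning-from-observation codebook: a latent action is inferred from the current and future observations, then used to reconstruct the future observation. Below is a minimal PyTorch sketch of that idea, assuming a VQ-VAE-style discrete codebook; the module names, dimensions, and loss weights are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class LatentActionCodebook(nn.Module):
    """Sketch of a VQ-style latent action space learned from future observations."""

    def __init__(self, obs_dim=768, action_dim=64, num_codes=512):
        super().__init__()
        # Infers a continuous latent action from (current, future) observation embeddings.
        self.action_encoder = nn.Sequential(
            nn.Linear(2 * obs_dim, obs_dim), nn.ReLU(), nn.Linear(obs_dim, action_dim)
        )
        # Discrete codebook the RL policy can act over instead of raw text tokens.
        self.codebook = nn.Embedding(num_codes, action_dim)
        # Reconstructs the future observation from the current one plus the quantized action.
        self.decoder = nn.Sequential(
            nn.Linear(obs_dim + action_dim, obs_dim), nn.ReLU(), nn.Linear(obs_dim, obs_dim)
        )

    def forward(self, obs_t, obs_next):
        z = self.action_encoder(torch.cat([obs_t, obs_next], dim=-1))
        # Nearest-neighbor lookup in the codebook.
        dists = torch.cdist(z, self.codebook.weight)   # (batch, num_codes)
        codes = dists.argmin(dim=-1)                    # (batch,)
        z_q = self.codebook(codes)
        # Standard VQ-VAE terms: pull codes toward encoder outputs and vice versa.
        codebook_loss = F.mse_loss(z_q, z.detach())
        commit_loss = F.mse_loss(z, z_q.detach())
        # Straight-through estimator so gradients reach the encoder through the lookup.
        z_q_st = z + (z_q - z).detach()
        obs_next_hat = self.decoder(torch.cat([obs_t, z_q_st], dim=-1))
        recon_loss = F.mse_loss(obs_next_hat, obs_next)
        return recon_loss + codebook_loss + 0.25 * commit_loss, codes
```

The design intuition, as described in the summary, is that a small discrete code space gives RL a far more tractable action space than the full text-token vocabulary while still capturing what a response changes in the conversation.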

πŸ“ Abstract
Vision-language models are increasingly employed as multimodal conversational agents (MCAs) for diverse conversational tasks. Recently, reinforcement learning (RL) has been widely explored for adapting MCAs to various human-AI interaction scenarios. Although RL fine-tuning yields strong gains in generalization, it still struggles with the extremely large text token space. To address this, we instead learn a compact latent action space for RL fine-tuning. Specifically, we adopt a learning-from-observation mechanism to construct the codebook for the latent action space: future observations are used to estimate current latent actions, which in turn are used to reconstruct those future observations. However, the scarcity of paired image-text data hinders learning a codebook with sufficient coverage. We therefore construct the latent action space from both paired image-text data and text-only data, using a cross-modal projector that transforms text embeddings into image-text embeddings. We initialize the cross-modal projector on paired image-text data and further train it on massive text-only data with a novel cycle-consistency loss to enhance its robustness. We show that our latent-action-based method outperforms competitive baselines on two conversation tasks across various RL algorithms.
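The abstract does not spell out the cycle-consistency loss, so the following is a hedged sketch of one plausible formulation: the projector maps text embeddings into the image-text embedding space (aligned on paired data), and on text-only data an inverse projector maps the result back, with the reconstruction error serving as the cycle term. The class name, the `backward_proj` inverse mapping, and the MSE objectives are assumptions for illustration, not the paper's exact losses.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalProjector(nn.Module):
    """Sketch of a text-to-multimodal projector trained with a cycle-consistency term."""

    def __init__(self, text_dim=768, mm_dim=768, hidden=1024):
        super().__init__()
        # Forward direction: text embedding -> image-text (multimodal) embedding space.
        self.forward_proj = nn.Sequential(
            nn.Linear(text_dim, hidden), nn.GELU(), nn.Linear(hidden, mm_dim)
        )
        # Inverse direction used only to close the cycle on text-only data.
        self.backward_proj = nn.Sequential(
            nn.Linear(mm_dim, hidden), nn.GELU(), nn.Linear(hidden, text_dim)
        )

    def paired_loss(self, text_emb, mm_emb):
        # Initialization stage on paired image-text data: align projected text
        # embeddings with the reference multimodal embeddings.
        return F.mse_loss(self.forward_proj(text_emb), mm_emb)

    def cycle_loss(self, text_emb):
        # Text-only stage: project into the image-text space and back, and
        # penalize deviation from the original text embedding.
        projected = self.forward_proj(text_emb)
        reconstructed = self.backward_proj(projected)
        return F.mse_loss(reconstructed, text_emb)
```

Under this reading, the cycle term lets abundant text-only dialogue data shape the projector, which is how the method broadens codebook coverage despite scarce paired image-text data.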
Problem

Research questions and friction points this paper is trying to address.

multimodal conversational agents
reinforcement learning
latent action space
image-text data scarcity
large text token space
Innovation

Methods, ideas, or system contributions that make the work stand out.

latent action space
multimodal conversational agents
reinforcement learning
cross-modal projector
cycle consistency loss