🤖 AI Summary
This work addresses two challenges that multimodal conversational agents face during reinforcement-learning fine-tuning: the vast textual action space and the scarcity of paired image-text data. To overcome these limitations, the authors model an implicit (latent) action space, inferring current latent actions from future observations via observational learning. They further improve generalization by training a cross-modal projector on both paired image-text data and text-only data, introducing a cycle-consistency loss that strengthens the projector's robustness and effectively expands coverage of the action space. Experiments show that the proposed method significantly outperforms competitive baselines on two dialogue tasks and yields consistent gains across multiple reinforcement-learning algorithms.
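The cycle-consistency idea for the projector can be illustrated with a minimal sketch: a text embedding is projected into the joint image-text space and then mapped back, and the round-trip reconstruction error serves as the training signal on text-only data. All names, dimensions, and the linear projectors below are hypothetical stand-ins, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

D_TEXT, D_JOINT = 16, 8  # hypothetical embedding sizes

# Forward projector: text embedding -> joint image-text space.
W_fwd = rng.normal(size=(D_TEXT, D_JOINT)) * 0.1
# Backward projector, used only to form the cycle loss.
W_bwd = rng.normal(size=(D_JOINT, D_TEXT)) * 0.1

def cycle_consistency_loss(text_emb: np.ndarray) -> float:
    """Mean squared error between a text embedding and its
    round-trip text -> joint space -> text reconstruction."""
    joint = text_emb @ W_fwd   # project into the image-text space
    recon = joint @ W_bwd      # map back to the text space
    return float(np.mean((recon - text_emb) ** 2))

batch = rng.normal(size=(4, D_TEXT))  # a batch of text-only embeddings
loss = cycle_consistency_loss(batch)
```

Because the loss needs no paired images, it can be minimized on massive text-only corpora, which is what lets the projector (and hence the codebook) cover a broader slice of the action space.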
📝 Abstract
Vision-language models are increasingly employed as multimodal conversational agents (MCAs) for diverse conversational tasks. Recently, reinforcement learning (RL) has been widely explored for adapting MCAs to various human-AI interaction scenarios. Despite yielding substantial gains in generalization performance, fine-tuning MCAs via RL still struggles with the extremely large text token space. To address this, we instead learn a compact latent action space for RL fine-tuning. Specifically, we adopt a learning-from-observation mechanism to construct the codebook for the latent action space, where future observations are leveraged to estimate current latent actions, which in turn are used to reconstruct those future observations. However, the scarcity of paired image-text data hinders learning a codebook with sufficient coverage. Thus, we leverage both paired image-text data and text-only data to construct the latent action space, using a cross-modal projector that transforms text embeddings into image-text embeddings. We initialize the cross-modal projector on paired image-text data, and further train it on massive text-only data with a novel cycle-consistency loss to enhance its robustness. We show that our latent-action-based method outperforms competitive baselines on two conversation tasks across various RL algorithms.
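The learning-from-observation codebook described above can be sketched as a vector-quantized inverse/forward dynamics loop: an inverse-dynamics encoder maps a pair of consecutive observations to a latent, the latent is snapped to its nearest codebook entry, and a forward-dynamics decoder reconstructs the future observation from that code. The dimensions, the linear encoder/decoder, and the reconstruction objective below are simplified assumptions for illustration, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)

D_OBS, D_LAT, K = 12, 6, 32  # hypothetical obs dim, latent dim, codebook size

codebook = rng.normal(size=(K, D_LAT))                  # latent-action codes
W_enc = rng.normal(size=(2 * D_OBS, D_LAT)) * 0.1       # inverse-dynamics encoder
W_dec = rng.normal(size=(D_OBS + D_LAT, D_OBS)) * 0.1   # forward-dynamics decoder

def infer_latent_action(obs_t, obs_next):
    """Inverse dynamics: estimate the latent action linking two
    consecutive observations, then quantize it to the nearest code."""
    z = np.concatenate([obs_t, obs_next]) @ W_enc
    idx = int(np.argmin(np.sum((codebook - z) ** 2, axis=1)))
    return idx, codebook[idx]

def reconstruct_next(obs_t, code):
    """Forward dynamics: predict the future observation from the
    current observation and the quantized latent action."""
    return np.concatenate([obs_t, code]) @ W_dec

obs_t = rng.normal(size=D_OBS)
obs_next = rng.normal(size=D_OBS)
idx, code = infer_latent_action(obs_t, obs_next)
recon_loss = float(np.mean((reconstruct_next(obs_t, code) - obs_next) ** 2))
```

Minimizing the reconstruction error trains the codebook so that each discrete code summarizes an action's effect on the observation; the RL policy can then act in this compact code space rather than over raw text tokens.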