Vision-Speech Models: Teaching Speech Models to Converse about Images

📅 2025-03-19
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This work introduces MoshiVis, presented as the first vision–speech multimodal large language model enabling real-time spoken dialogue about images. To compensate for the limited visual understanding of pretrained speech models, the authors design a dynamic gating mechanism that lets the model adaptively switch between visual inputs and unrelated, speech-only conversation topics. They further propose a single-stage, parameter-efficient fine-tuning pipeline that jointly trains on image–text and image–speech data, combining lightweight adapter modules (e.g., LoRA-style adapters), cross-modal alignment, and joint speech–image representation learning. On multi-task visual understanding benchmarks, MoshiVis significantly outperforms baseline models, and it supports high-fidelity, low-latency image–speech interaction under both audio and text prompts. The inference code and a dedicated image–speech evaluation dataset are publicly released.

📝 Abstract
The recent successes of Vision-Language models raise the question of how to equivalently imbue a pretrained speech model with vision understanding, an important milestone towards building a multimodal speech model able to freely converse about images. Building such a conversational Vision-Speech model brings its unique challenges: (i) paired image-speech datasets are much scarcer than their image-text counterparts, (ii) ensuring real-time latency at inference is crucial, which brings compute and memory constraints, and (iii) the model should preserve prosodic features (e.g., speaker tone) which cannot be inferred from text alone. In this work, we introduce MoshiVis, augmenting a recent dialogue speech LLM, Moshi, with visual inputs through lightweight adaptation modules. An additional dynamic gating mechanism enables the model to more easily switch between the visual inputs and unrelated conversation topics. To reduce training costs, we design a simple one-stage, parameter-efficient fine-tuning pipeline in which we leverage a mixture of image-text (i.e., "speechless") and image-speech samples. We evaluate the model on downstream visual understanding tasks with both audio and text prompts, and report qualitative samples of interactions with MoshiVis. Our inference code will be made available, as well as the image-speech data used for audio evaluation.
Problem

Research questions and friction points this paper is trying to address.

Develop a Vision-Speech model for image-based conversations
Address scarcity of paired image-speech datasets
Ensure real-time latency and preserve prosodic features
Innovation

Methods, ideas, or system contributions that make the work stand out.

Lightweight adaptation modules for visual inputs
Dynamic gating mechanism for topic switching
Parameter-efficient fine-tuning with mixed datasets
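The gated lightweight-adapter idea described above can be sketched in a few lines. The sketch below is purely illustrative and is not the paper's implementation: the function name, the bottleneck down/up projections, the ReLU non-linearity, and the scalar sigmoid gate are all assumptions standing in for MoshiVis's actual adaptation modules. The key property it illustrates is that the gate can smoothly suppress the visual contribution, letting the frozen backbone's activations pass through unchanged when the conversation drifts away from the image.

```python
import numpy as np


def gated_adapter(x, W_down, W_up, w_gate, b_gate):
    """Illustrative gated bottleneck adapter (hypothetical, not the paper's code).

    x       : (d,)   backbone activation at one position
    W_down  : (r, d) down-projection to a small rank r
    W_up    : (d, r) up-projection back to the model dimension
    w_gate  : (d,)   weights of a scalar gate computed from x
    b_gate  : float  gate bias

    Returns x plus a gated low-rank update: when the gate saturates near 0,
    the adapter output reduces to the unmodified backbone activation.
    """
    h = np.maximum(W_down @ x, 0.0)                    # bottleneck + ReLU (assumed)
    delta = W_up @ h                                   # low-rank update
    g = 1.0 / (1.0 + np.exp(-(w_gate @ x + b_gate)))   # scalar gate in (0, 1)
    return x + g * delta


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=8)
    out = gated_adapter(
        x,
        rng.normal(size=(2, 8)) * 0.1,  # rank-2 bottleneck
        rng.normal(size=(8, 2)) * 0.1,
        rng.normal(size=8) * 0.1,
        0.0,
    )
    print(out.shape)  # same shape as the input activation
```

Because only the small matrices `W_down`, `W_up`, and the gate parameters are trained while the backbone stays frozen, this style of adapter keeps the fine-tuned parameter count low, which is consistent with the parameter-efficient, single-stage training the summary describes.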
Amélie Royer
Kyutai
Moritz Böhle
Kyutai
Gabriel de Marmiesse
Kyutai
Laurent Mazaré
Kyutai
Neil Zeghidour
Kyutai
Alexandre Défossez
Kyutai
Patrick Pérez
Kyutai