i-LAVA: Insights on Low Latency Voice-2-Voice Architecture for Agents

📅 2025-09-25
📈 Citations: 0
Influential: 0
🤖 AI Summary
To address the challenge of balancing low latency and high interaction quality in real-time voice-to-voice (V2V) communication, this paper proposes an end-to-end optimized architecture integrating ASR, dialogue management, and TTS modules, enhanced by a context- and prosody-aware CSM1b dialogue modeling method. It presents the first systematic analysis of the trade-off between residual vector quantization (RVQ) iteration count and codebook size—quantifying their impact on real-time performance and speech quality—and jointly optimizes RVQ with the Mimi audio codec. Experiments demonstrate that, while maintaining speech naturalness (MOS ≥ 3.8), the approach reduces real-time factor (RTF) by 42% and compresses end-to-end latency to <300 ms, significantly improving conversational responsiveness. The core contributions are: (1) establishing a lightweight, interaction-oriented V2V modeling paradigm; and (2) empirically quantifying the critical influence of RVQ configuration on system-level performance.

📝 Abstract
We experiment with a low-latency, end-to-end voice-to-voice (V-2-V) communication model and optimize it for real-time conversational applications. By analyzing the components essential to a V-2-V system, namely automatic speech recognition (ASR), text-to-speech (TTS), and dialog management, our work examines how to reduce processing time while maintaining high-quality interactions, and identifies the levers for optimizing such a system. We find that the TTS component, which generates life-like voice full of emotion, including natural pauses and exclamations, has the highest impact on the real-time factor (RTF). The experimented V-2-V architecture utilizes CSM1b, which can understand both the tone and the context of a conversation by ingesting the audio and text of prior exchanges to generate contextually accurate speech. We explored reducing the number of Residual Vector Quantization (RVQ) iterations performed by the TTS decoder, which comes at the cost of a decrease in the quality of the generated voice. Our experimental evaluations also demonstrate that for CSM-based V-2-V implementations, the most important optimizations come from reducing the number of RVQ iterations together with the number of codebooks used in Mimi.
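The latency lever the abstract describes hinges on how residual vector quantization works: each codebook stage quantizes the residual left by the previous stage, so decoding with fewer stages is cheaper but reconstructs the latent more coarsely. The toy sketch below (NumPy, with k-means-fitted codebooks on synthetic latents; it is not the Mimi codec or the paper's implementation) illustrates that trade-off:

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(data, k, iters=20):
    """Tiny k-means used to fit one codebook stage."""
    centers = data[rng.choice(len(data), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = data[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def fit_rvq(data, n_stages, k):
    """Fit each codebook on the residual left by the earlier stages."""
    residual = data.copy()
    codebooks = []
    for _ in range(n_stages):
        cb = kmeans(residual, k)
        codebooks.append(cb)
        d = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=-1)
        residual = residual - cb[d.argmin(axis=1)]
    return codebooks

def rvq_codes(data, codebooks):
    """Greedy encode: one index per vector per stage."""
    residual = data.copy()
    codes = []
    for cb in codebooks:
        d = np.linalg.norm(residual[:, None, :] - cb[None, :, :], axis=-1)
        idx = d.argmin(axis=1)
        codes.append(idx)
        residual = residual - cb[idx]
    return codes

def rvq_decode(codes, codebooks):
    """Decoding sums codewords; using a prefix of stages is cheaper."""
    return sum(cb[idx] for idx, cb in zip(codes, codebooks))

latents = rng.normal(size=(512, 8))      # synthetic stand-in for codec latents
books = fit_rvq(latents, n_stages=8, k=32)
codes = rvq_codes(latents, books)

# Fewer RVQ stages -> less decode work, higher reconstruction error.
for n in (2, 4, 8):
    mse = np.mean((latents - rvq_decode(codes[:n], books[:n])) ** 2)
    print(f"{n} stages: MSE {mse:.4f}")
```

Reconstruction error shrinks monotonically as stages are added, which is why the paper can trade RVQ iteration count (and codebook count) against RTF while monitoring speech quality.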
Problem

Research questions and friction points this paper is trying to address.

Optimizing low-latency voice-to-voice systems for real-time conversation applications
Reducing processing time while maintaining high-quality voice interactions
Balancing TTS quality and latency by optimizing RVQ iterations and codebooks
Innovation

Methods, ideas, or system contributions that make the work stand out.

Optimizes TTS component for real-time factor
Uses CSM1b model for contextual speech generation
Reduces RVQ iterations to lower latency
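The metric these optimizations target, the real-time factor, is simply wall-clock synthesis time divided by the duration of the audio produced; RTF below 1 means speech is generated faster than it plays back. A minimal measurement sketch (the `synthesize` callable is a hypothetical stand-in, not an API from the paper):

```python
import time

def real_time_factor(synthesize, text, sample_rate=24_000):
    """RTF = synthesis wall-clock time / duration of the audio produced.
    RTF < 1 means the system generates speech faster than real time."""
    start = time.perf_counter()
    samples = synthesize(text)                  # returns a sequence of samples
    elapsed = time.perf_counter() - start
    audio_seconds = len(samples) / sample_rate
    return elapsed / audio_seconds

# Stand-in synthesizer: 0.5 s of silence, produced near-instantly.
rtf = real_time_factor(lambda text: [0.0] * 12_000, "hello")
print(f"RTF = {rtf:.4f}")
```

Under this definition, the paper's reported 42% RTF reduction translates directly into lower end-to-end latency budget per conversational turn.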