ChipChat: Low-Latency Cascaded Conversational Agent in MLX

📅 2025-08-26
🤖 AI Summary
To address the high latency that conventional cascaded on-device voice assistants incur from sequential module processing, this paper proposes ChipChat, a low-latency streaming architecture. Methodologically: (i) we design a streaming mixture-of-experts ASR model that enables fine-grained joint optimization of speech recognition and language understanding; (ii) we introduce a state-action-augmented lightweight LLM inference mechanism to enhance real-time intent comprehension; and (iii) we integrate an end-to-end neural vocoder with personalized speaker modeling to improve TTS quality and efficiency. Deployed fully locally with the MLX framework, without requiring a discrete GPU, ChipChat achieves end-to-end response latency under 1 second on a Mac Studio, ensuring strong privacy guarantees, real-time interactivity, and practical usability. Our approach breaks through the latency bottleneck of cascaded systems while maintaining fully on-device execution.
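The latency argument above hinges on streaming: each cascaded stage emits partial results as soon as they are available, so downstream stages start before upstream ones finish. The stand-in sketch below (stage names, logic, and data formats are all illustrative assumptions, not ChipChat's actual code) shows this pipelining pattern with stub ASR/LLM/TTS stages connected by queues:

```python
# Hypothetical sketch of a streaming cascaded pipeline (ASR -> LLM -> TTS).
# Every name here is an illustrative stand-in; the point is only that each
# stage consumes a stream and emits partial results immediately, so the LLM
# and TTS can begin work before the ASR has decoded the whole utterance.
import queue
import threading

SENTINEL = None  # end-of-stream marker

def asr_stage(audio_chunks, out_q):
    """Stub streaming ASR: pretend each audio chunk decodes to one word."""
    for chunk in audio_chunks:
        out_q.put(chunk)          # emit partial transcript right away
    out_q.put(SENTINEL)

def llm_stage(in_q, out_q):
    """Stub LLM: echo each recognized word as a response token."""
    while (word := in_q.get()) is not SENTINEL:
        out_q.put(word.upper())   # stream response tokens downstream
    out_q.put(SENTINEL)

def tts_stage(in_q, sink):
    """Stub TTS: 'synthesize' each token as soon as it arrives."""
    while (tok := in_q.get()) is not SENTINEL:
        sink.append(f"<audio:{tok}>")

def run_pipeline(audio_chunks):
    q1, q2, sink = queue.Queue(), queue.Queue(), []
    stages = [
        threading.Thread(target=asr_stage, args=(audio_chunks, q1)),
        threading.Thread(target=llm_stage, args=(q1, q2)),
        threading.Thread(target=tts_stage, args=(q2, sink)),
    ]
    for t in stages:
        t.start()
    for t in stages:
        t.join()
    return sink

print(run_pipeline(["hello", "world"]))
# → ['<audio:HELLO>', '<audio:WORLD>']
```

In a real deployment each stub would be replaced by an MLX model, but the queue-per-boundary structure is what lets the cascade hide per-module latency behind concurrent execution.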

📝 Abstract
The emergence of large language models (LLMs) has transformed spoken dialog systems, yet the optimal architecture for real-time on-device voice agents remains an open question. While end-to-end approaches promise theoretical advantages, cascaded systems (CSs) continue to outperform them in language understanding tasks, despite being constrained by sequential processing latency. In this work, we introduce ChipChat, a novel low-latency CS that overcomes traditional bottlenecks through architectural innovations and streaming optimizations. Our system integrates streaming (a) conversational speech recognition with mixture-of-experts, (b) state-action augmented LLM, (c) text-to-speech synthesis, (d) neural vocoder, and (e) speaker modeling. Implemented using MLX, ChipChat achieves sub-second response latency on a Mac Studio without dedicated GPUs, while preserving user privacy through complete on-device processing. Our work shows that strategically redesigned CSs can overcome their historical latency limitations, offering a promising path forward for practical voice-based AI agents.
Problem

Research questions and friction points this paper is trying to address.

Optimizing real-time on-device voice agent architecture
Reducing cascaded system latency for conversational AI
Achieving privacy-preserving on-device processing without GPUs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Streaming speech recognition with mixture-of-experts
State-action augmented large language model
Complete on-device processing with MLX implementation
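One way to read "state-action augmented LLM" is that the model prefixes each reply with a structured action tag that the agent strips off before synthesis. The tag format below is purely an assumption for illustration; the paper's actual scheme may differ:

```python
# Hypothetical parser for a "state-action augmented" LLM reply.
# Assumed (not from the paper) convention: replies begin with an
# <action:NAME> tag naming the intent, followed by the text to speak.
import re

ACTION_RE = re.compile(r"^<action:(?P<name>\w+)>\s*")

def split_action(llm_reply: str):
    """Return (action, spoken_text) from an action-tagged LLM reply."""
    m = ACTION_RE.match(llm_reply)
    if m is None:
        return "respond", llm_reply   # default action when no tag is present
    return m.group("name"), llm_reply[m.end():]

print(split_action("<action:set_timer> Timer set for five minutes."))
# → ('set_timer', 'Timer set for five minutes.')
```

Separating the action from the utterance this way lets the agent dispatch device behavior and begin TTS in the same pass, which fits the paper's goal of real-time intent comprehension.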