🤖 AI Summary
This work addresses the high latency that retrieval-augmented generation (RAG) introduces into real-time spoken dialogue systems by proposing the first dual-agent RAG architecture tailored to this setting. The approach decouples retrieval from generation: a background “slow-thinking” agent predicts likely directions of the dialogue, proactively prefetches relevant documents, and stores them in a semantic cache, while a foreground “fast-speaking” agent generates responses exclusively from that cache. On a cache hit, the system bypasses the vector database entirely, reducing retrieval to a sub-millisecond cache lookup. Experimental results show that this design substantially alleviates the latency bottleneck of conventional RAG pipelines in real-time voice applications.
📝 Abstract
We present VoiceAgentRAG, an open-source dual-agent memory router that decouples retrieval from response generation. A background Slow Thinker agent continuously monitors the conversation stream, predicts likely follow-up topics using an LLM, and pre-fetches relevant document chunks into a FAISS-backed semantic cache. A foreground Fast Talker agent reads only from this sub-millisecond cache, bypassing the vector database entirely on cache hits.
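The decoupled loop described above can be sketched roughly as follows. All names here (`SemanticCache`, `SlowThinker`, `FastTalker`, `predict_topics`) are illustrative assumptions rather than the project's actual API, and a plain in-memory dict stands in for the FAISS-backed cache:

```python
# Minimal sketch of the dual-agent split: a background Slow Thinker
# prefetches document chunks into a shared cache, and a foreground
# Fast Talker answers only from that cache, never the vector DB.
# All class/function names are hypothetical, not VoiceAgentRAG's API.


class SemanticCache:
    """Topic-keyed cache the Slow Thinker fills and the Fast Talker reads."""

    def __init__(self):
        self._store = {}

    def put(self, topic, chunks):
        self._store[topic] = chunks

    def get(self, topic):
        # None signals a cache miss.
        return self._store.get(topic)


class SlowThinker:
    """Background agent: predicts follow-up topics and prefetches chunks.

    `predict_topics` stands in for the LLM-based topic predictor and
    `retriever` for the vector-database query; both are injected here.
    """

    def __init__(self, cache, retriever, predict_topics):
        self.cache = cache
        self.retriever = retriever
        self.predict_topics = predict_topics

    def step(self, dialogue_history):
        # Prefetch chunks for every predicted topic not already cached.
        for topic in self.predict_topics(dialogue_history):
            if self.cache.get(topic) is None:
                self.cache.put(topic, self.retriever(topic))


class FastTalker:
    """Foreground agent: reads exclusively from the cache."""

    def __init__(self, cache):
        self.cache = cache

    def respond(self, topic):
        chunks = self.cache.get(topic)
        if chunks is None:
            return None  # miss: caller must fall back or wait for prefetch
        return f"Answer grounded in: {chunks}"
```

A cache hit is then just a dict lookup on the foreground path, which is what makes the hit-path latency independent of vector-database query time:

```python
cache = SemanticCache()
thinker = SlowThinker(
    cache,
    retriever=lambda t: [f"doc about {t}"],       # stand-in for FAISS search
    predict_topics=lambda history: ["pricing"],   # stand-in for LLM prediction
)
talker = FastTalker(cache)

thinker.step(["user asked about subscription plans"])  # background prefetch
print(talker.respond("pricing"))  # served from cache, no DB query
```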