EmbeddingRWKV: State-Centric Retrieval with Reusable States

πŸ“… 2026-01-10
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the inefficiency and redundant computation that traditional retrieval-augmented generation (RAG) systems incur because their embedding and reranking stages are decoupled. The authors fine-tune the RWKV large language model into EmbeddingRWKV, proposing a state-centric retrieval paradigm in which a compact, reusable internal β€œstate” serves as an information bridge between the two stages, unifying embedding and reranking in a single model. During reranking, only the query tokens are processed, and a sparse-layer state extraction with a uniform layer-selection strategy retains 98.62% of full-model performance while using only 25% of the model layers, yielding a 5.4–44.8Γ— reranking speedup and substantially improving overall system efficiency.
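The uniform layer-selection strategy mentioned above can be illustrated with a minimal sketch: keep only evenly spaced layers (e.g. 25% of them) when extracting states. The function name and ratio default are illustrative, not the authors' API.

```python
# Hypothetical sketch of uniform layer selection: keep a fixed fraction
# of layers, evenly spaced across the model depth.
def uniform_layer_selection(num_layers: int, keep_ratio: float = 0.25) -> list[int]:
    """Return evenly spaced layer indices covering `keep_ratio` of the stack."""
    keep = max(1, round(num_layers * keep_ratio))  # how many layers to retain
    stride = num_layers / keep                     # spacing between kept layers
    return [int(i * stride) for i in range(keep)]

# For a 24-layer model at 25%, this keeps layers [0, 4, 8, 12, 16, 20].
print(uniform_layer_selection(24))
```

Only the states at these selected layers would be cached per document, shrinking the stored-state footprint by the same factor.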

πŸ“ Abstract
Current Retrieval-Augmented Generation (RAG) systems typically employ a traditional two-stage pipeline: an embedding model for initial retrieval followed by a reranker for refinement. However, this paradigm suffers from significant inefficiency due to the lack of shared information between stages, leading to substantial redundant computation. To address this limitation, we propose \textbf{State-Centric Retrieval}, a unified retrieval paradigm that utilizes "states" as a bridge to connect embedding models and rerankers. First, we perform state representation learning by fine-tuning an RWKV-based LLM, transforming it into \textbf{EmbeddingRWKV}, a unified model that serves as both an embedding model and a state backbone for extracting compact, reusable states. Building upon these reusable states, we further design a state-based reranker to fully leverage precomputed information. During reranking, the model processes only query tokens, decoupling inference cost from document length and yielding a 5.4$\times$--44.8$\times$ speedup. Furthermore, we observe that retaining all intermediate layer states is unnecessary; with a uniform layer selection strategy, our model maintains 98.62\% of full-model performance using only 25\% of the layers. Extensive experiments demonstrate that State-Centric Retrieval achieves high-quality retrieval and reranking results while significantly enhancing overall system efficiency. Code is available at \href{https://github.com/howard-hou/EmbeddingRWKV}{our GitHub repository}.
Problem

Research questions and friction points this paper is trying to address.

Retrieval-Augmented Generation
embedding model
reranker
redundant computation
system efficiency
Innovation

Methods, ideas, or system contributions that make the work stand out.

State-Centric Retrieval
EmbeddingRWKV
Reusable States
Efficient Reranking
RWKV-based LLM
πŸ”Ž Similar Papers
No similar papers found.