🤖 AI Summary
To address the low accuracy, high inference overhead, and poor compatibility with lightweight architectures that small-scale large language models (LLMs) exhibit in resource-constrained edge environments for domain-specific retrieval-augmented generation (RAG), this paper proposes Chain-of-Rank (CoR), a paradigm that replaces conventional chain-of-thought reasoning with lightweight reliability ranking of retrieved documents, thereby jointly optimizing inference efficiency and domain adaptability. CoR integrates a domain-finetuned lightweight LLM, fine-grained document relevance scoring, a low-overhead retrieval fusion architecture, and an edge-friendly RAG pipeline. Evaluated across multiple domain-specific RAG benchmarks, CoR achieves state-of-the-art performance while reducing inference latency by 42% and memory footprint by 37%, significantly enhancing feasibility for edge deployment.
📝 Abstract
Retrieval-augmented generation (RAG) with large language models (LLMs) is especially valuable in specialized domains, where precision is critical. To further specialize an LLM to a target domain, domain-specific RAG has recently been developed, in which the LLM is exposed to the target domain early via finetuning. Domain-specific RAG is particularly well suited to resource-constrained environments such as edge devices, which must perform a specific task (e.g., personalization) reliably using only small-scale LLMs. While domain-specific RAG aligns well with edge devices in this respect, it often relies on widely used reasoning techniques such as chain-of-thought (CoT). The reasoning step helps the model understand the given external knowledge, yet it is computationally expensive and difficult for small-scale LLMs to learn. To tackle this, we propose Chain of Rank (CoR), which shifts the focus from intricate, lengthy reasoning to simple ranking of the reliability of the input external documents. CoR thus reduces computational complexity while maintaining high accuracy, making it particularly well suited to resource-constrained environments. We attain state-of-the-art (SOTA) results on benchmarks and analyze the method's efficacy.
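The core idea above, rank retrieved documents by reliability and answer from the top-ranked ones rather than generating a long chain-of-thought, can be sketched as follows. This is a minimal illustration, not the paper's implementation: `score_reliability` here is a hypothetical stand-in (query-token overlap), whereas in CoR the ranking is produced by the finetuned small-scale LLM itself.

```python
def score_reliability(query: str, doc: str) -> float:
    """Toy reliability proxy: fraction of query tokens appearing in the doc.
    A stand-in for the model-produced reliability ranking described in CoR."""
    q_tokens = set(query.lower().split())
    d_tokens = set(doc.lower().split())
    return len(q_tokens & d_tokens) / max(len(q_tokens), 1)

def chain_of_rank(query: str, docs: list[str], top_k: int = 2) -> str:
    """Rank retrieved docs by reliability, keep the top-k, and build a compact
    prompt. The small LLM then answers directly from this prompt, skipping
    the long intermediate reasoning a CoT prompt would require."""
    ranked = sorted(docs, key=lambda d: score_reliability(query, d), reverse=True)
    context = "\n".join(f"[{i + 1}] {d}" for i, d in enumerate(ranked[:top_k]))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "The Eiffel Tower is in Paris and was completed in 1889.",
    "Bananas are rich in potassium.",
    "Paris is the capital of France.",
]
prompt = chain_of_rank("When was the Eiffel Tower in Paris completed?", docs)
```

The unreliable/irrelevant document is dropped before generation, so the prompt stays short, which is the efficiency gain the abstract attributes to replacing reasoning with ranking.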