Exploring Fine-Tuning for In-Context Retrieval and Efficient KV-Caching in Long-Context Language Models

📅 2026-01-26
📈 Citations: 0
Influential: 0
📄 PDF
🤖 AI Summary
This study systematically investigates how fine-tuning strategies affect long-context language models operating within million-token context windows, addressing their weak in-context retrieval capabilities, limited robustness to KV-cache compression, and poor cross-domain generalization. For the first time, it evaluates the joint impact of fine-tuning on both in-context retrieval and robustness to KV-cache compression. The results show that fine-tuning yields up to a 20-point improvement on in-domain tasks. Out-of-domain generalization is task dependent: fine-tuned models gain 9 points on financial question answering, while retrieval-augmented generation (RAG) remains superior on multiple-choice questions. Fine-tuning also brings moderate gains in robustness to KV-cache compression, revealing a consistent pattern: in-domain performance improves at the cost of a widening gap in out-of-domain settings.

📝 Abstract
With context windows of millions of tokens, Long-Context Language Models (LCLMs) can encode entire document collections, offering a strong alternative to conventional retrieval-augmented generation (RAG). However, it remains unclear whether fine-tuning strategies can improve long-context performance and translate to greater robustness under KV-cache compression techniques. In this work, we investigate which training strategies most effectively enhance LCLMs' ability to identify and use relevant information, as well as their robustness under KV-cache compression. Our experiments show substantial in-domain improvements, achieving gains of up to +20 points over the base model. However, out-of-domain generalization remains task dependent with large variance: LCLMs excel on finance questions (+9 points), while RAG shows stronger performance on multiple-choice questions (+6 points) over the baseline models. Finally, we show that our fine-tuning approaches bring moderate improvements in robustness under KV-cache compression, with gains varying across tasks.
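For readers unfamiliar with the KV-cache compression the abstract refers to: a common family of techniques evicts cached key/value entries to fit a fixed memory budget, for example by keeping the tokens with the highest cumulative attention mass (heavy-hitter-style eviction). The sketch below is illustrative only; the function name and scoring heuristic are assumptions, not the paper's method.

```python
import numpy as np

def compress_kv_cache(keys, values, attn_scores, budget):
    """Keep only the `budget` cache entries with the highest cumulative
    attention mass; evict the rest.

    keys, values: (T, d) arrays, one row per cached token.
    attn_scores:  (T,) array of cumulative attention received per token.
    """
    if keys.shape[0] <= budget:
        return keys, values
    # Top-`budget` tokens by attention mass, restored to original
    # positional order so relative position is preserved.
    top = np.sort(np.argsort(attn_scores)[-budget:])
    return keys[top], values[top]

# Toy example: 6 cached tokens, compress to a budget of 3.
rng = np.random.default_rng(0)
T, d = 6, 4
keys = rng.standard_normal((T, d))
values = rng.standard_normal((T, d))
scores = np.array([0.9, 0.1, 0.05, 0.7, 0.02, 0.4])

k2, v2 = compress_kv_cache(keys, values, scores, budget=3)
print(k2.shape)  # (3, 4): tokens 0, 3, and 5 survive eviction
```

A fine-tuned model that is robust to this kind of compression should degrade gracefully as the budget shrinks, which is the property the paper measures across tasks.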
Problem

Research questions and friction points this paper is trying to address.

Long-Context Language Models
Fine-Tuning
KV-Caching
Retrieval
Robustness
Innovation

Methods, ideas, or system contributions that make the work stand out.

fine-tuning
long-context language models
in-context retrieval
KV-cache compression
robustness