Halo: Domain-Aware Query Optimization for Long-Context Question Answering

📅 2026-03-18
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenges of degraded answer quality and high API costs in existing long-context question answering systems, which also struggle to effectively leverage user-provided domain knowledge. To overcome these limitations, the authors propose Halo, a novel framework that systematically extracts and structures domain knowledge from user prompts into three types of executable operators, integrated into a multi-stage query pipeline encompassing document pruning, chunk filtering, and answer ranking. Halo unifies natural language understanding, query optimization, and retrieval-augmented generation, and incorporates a runtime fallback mechanism to handle inaccuracies in provided knowledge. Experiments demonstrate that Halo improves accuracy by up to 13% over baselines across financial, literary, and scientific datasets, reduces costs by 4.8×, and enables lightweight models to approach the performance of state-of-the-art large models at 78× lower computational expense.

📝 Abstract
Long-context question answering (QA) over lengthy documents is critical for applications such as financial analysis, legal review, and scientific research. Current approaches, such as processing entire documents via a single LLM call or retrieving relevant chunks via RAG, have two drawbacks. First, as context size increases, response quality can degrade, impacting accuracy. Second, iteratively processing hundreds of input documents can incur prohibitively high costs in API calls. To improve response quality and reduce the number of iterations needed to get the desired response, users tend to add domain knowledge to their prompts. However, existing systems fail to systematically capture and use this knowledge to guide query processing. Domain knowledge is treated as prompt tokens alongside the document: the LLM may or may not follow it, there is no reduction in computational cost, and when outputs are incorrect, users must manually iterate. We present Halo, a long-context QA framework that automatically extracts domain knowledge from user prompts and applies it as executable operators across a multi-stage query execution pipeline. Halo identifies three common forms of domain knowledge - where in the document to look, what content to ignore, and how to verify the answer - and applies each at the pipeline stage where it is most effective: pruning the document before chunk selection, filtering irrelevant chunks before inference, and ranking candidate responses after generation. To handle imprecise or invalid domain knowledge, Halo includes a fallback mechanism that detects low-quality operators at runtime and selectively disables them. Our evaluation across finance, literature, and scientific datasets shows that Halo achieves up to 13% higher accuracy and 4.8x lower cost compared to baselines, and enables a lightweight open-source model to approach frontier LLM accuracy at 78x lower cost.
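The abstract's three operator types and runtime fallback can be sketched as a minimal pipeline. This is an illustrative reconstruction, not the paper's actual API: all function names, data shapes, and the fallback heuristic (disable an operator if it eliminates everything) are assumptions for clarity.

```python
# Hypothetical sketch of a Halo-style pipeline: each extracted form of domain
# knowledge becomes an executable operator at one pipeline stage.
# Names and structures here are illustrative, not from the paper.

def prune_document(sections, locate_hint):
    # "Where to look": keep only sections whose title matches the hint,
    # so later stages never see irrelevant parts of the document.
    return [s for s in sections if locate_hint.lower() in s["title"].lower()]

def filter_chunks(chunks, ignore_terms):
    # "What to ignore": drop chunks containing any excluded term
    # before they are sent to the model for inference.
    return [c for c in chunks if not any(t in c for t in ignore_terms)]

def rank_answers(candidates, verify):
    # "How to verify": order candidate answers so those passing the
    # user-supplied verification check come first.
    return sorted(candidates, key=verify, reverse=True)

def run_pipeline(sections, locate_hint, ignore_terms, verify, generate):
    kept = prune_document(sections, locate_hint)
    # Fallback (assumed heuristic): if an operator derived from imprecise
    # domain knowledge prunes everything, disable it and keep the input.
    if not kept:
        kept = sections
    chunks = [c for s in kept for c in s["chunks"]]
    filtered = filter_chunks(chunks, ignore_terms) or chunks
    candidates = generate(filtered)  # stand-in for LLM answer generation
    return rank_answers(candidates, verify)[0]
```

For example, with sections `[{"title": "Revenue", "chunks": ["revenue rose 5%", "footnote: draft"]}, {"title": "Appendix", "chunks": ["misc"]}]`, the hint `"revenue"`, the ignore term `"footnote"`, and an identity `generate`, the pipeline returns `"revenue rose 5%"`. The key design point the paper argues is that each operator runs at the stage where it cuts the most cost: pruning shrinks the document before chunking, filtering shrinks the prompt before inference, and verification ranks outputs after generation.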
Problem

Research questions and friction points this paper is trying to address.

long-context question answering
domain knowledge
query optimization
retrieval-augmented generation
computational cost
Innovation

Methods, ideas, or system contributions that make the work stand out.

domain-aware query optimization
long-context QA
executable domain knowledge
multi-stage query pipeline
cost-efficient LLM inference