Online Domain-aware LLM Decoding for Continual Domain Evolution

πŸ“… 2026-02-08
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ“„ PDF
πŸ€– AI Summary
This work addresses the challenge that conventional offline fine-tuning leaves large language models unable to adapt in real time to evolving domain knowledge and concept drift, leading to degraded generation quality. To overcome this limitation, the authors propose the Online Domain-aware Decoding (ODD) framework, which introduces online domain adaptation directly into the decoding phase for the first time. ODD dynamically fuses the base language model's predictions with a trie-based prior at the probability level, modulating confidence through divergence and continuity signals, without requiring model retraining. The approach effectively mitigates concept drift and consistently outperforms existing baselines across multiple scenarios, achieving an absolute improvement of 0.065 in ROUGE-L and a relative gain of 13.6% in cosine similarity.
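The trie-based prior mentioned above can be illustrated with a minimal sketch: a prefix tree over observed token sequences whose counts yield an empirical next-token distribution that can be updated online, with no retraining. This is an assumption-laden illustration of the general idea, not the paper's actual data structure; the class name `TriePrior` and its smoothing-free counting are hypothetical.

```python
from collections import defaultdict

class TriePrior:
    """Prefix tree over observed token sequences.

    Hypothetical sketch: node counts give an empirical next-token
    prior that can be updated online as new domain text arrives.
    """
    def __init__(self):
        self.children = {}              # token -> child TriePrior node
        self.counts = defaultdict(int)  # next-token counts at this node

    def update(self, tokens):
        """Insert one observed token sequence online (no retraining)."""
        node = self
        for tok in tokens:
            node.counts[tok] += 1
            node = node.children.setdefault(tok, TriePrior())

    def next_prob(self, prefix):
        """Empirical next-token distribution after `prefix` (empty if unseen)."""
        node = self
        for tok in prefix:
            if tok not in node.children:
                return {}
            node = node.children[tok]
        total = sum(node.counts.values())
        return {t: c / total for t, c in node.counts.items()} if total else {}
```

For example, after inserting "the new rule" and "the new policy", querying the prefix "the new" returns a 50/50 split over "rule" and "policy", while an unseen prefix yields an empty prior.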

πŸ“ Abstract
LLMs are typically fine-tuned offline on domain-specific data, assuming a static domain. In practice, domain knowledge evolves continuously through new regulations, products, services, and interaction patterns, and retraining or fine-tuning LLMs for every new instance is computationally infeasible. Real-world environments also exhibit temporal dynamics with shifting data distributions; disregarding this phenomenon, commonly referred to as concept drift, can significantly diminish a model's predictive accuracy. This mismatch between evolving domains and static adaptation pipelines highlights the need for efficient, real-time adaptation without costly retraining. In response, we introduce the Online Domain-aware Decoding (ODD) framework. ODD performs probability-level fusion between a base LLM and a prefix-tree prior, guided by adaptive confidence modulation using disagreement and continuity signals. Empirical evaluation under diverse drift scenarios demonstrates that ODD consistently surpasses LLM-Greedy and LLM-Temp Scaled across all syntactic and semantic NLG metrics. It yields an absolute ROUGE-L gain of 0.065 and a 13.6% relative improvement in cosine similarity over the best baseline. These results demonstrate ODD's robustness to evolving lexical and contextual patterns, making it suitable for dynamic LLM applications.
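The probability-level fusion with adaptive confidence modulation described in the abstract could be sketched as follows. In this illustrative (not paper-accurate) version, the trie prior's weight grows with a continuity signal and shrinks with LM/prior disagreement, here measured by a symmetric KL-style divergence; the weighting function and the names `alpha` and `continuity` are assumptions.

```python
import math

def fuse(p_lm, p_trie, continuity, alpha=1.0):
    """Fuse LM and trie next-token distributions at the probability level.

    Illustrative sketch only: high continuity and low disagreement
    increase the weight `w` placed on the trie prior; the exact
    signals and weighting in ODD are not specified here.
    """
    vocab = set(p_lm) | set(p_trie)
    eps = 1e-12
    # Disagreement signal: symmetric KL divergence between distributions.
    div = 0.0
    for t in vocab:
        a, b = p_lm.get(t, eps), p_trie.get(t, eps)
        div += 0.5 * (a * math.log(a / b) + b * math.log(b / a))
    # Confidence modulation: weight decays exponentially with disagreement.
    w = continuity * math.exp(-alpha * div)
    fused = {t: (1 - w) * p_lm.get(t, 0.0) + w * p_trie.get(t, 0.0)
             for t in vocab}
    z = sum(fused.values())
    return {t: v / z for t, v in fused.items()}
```

With `continuity=0` the fused distribution reduces to the base LM's; when the two distributions agree exactly, the divergence term vanishes and the prior receives its full continuity-scaled weight.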
Problem

Research questions and friction points this paper is trying to address.

domain evolution
concept drift
large language models
real-time adaptation
temporal dynamics
Innovation

Methods, ideas, or system contributions that make the work stand out.

online adaptation
domain evolution
concept drift
prefix-tree prior
confidence modulation
πŸ”Ž Similar Papers
No similar papers found.
Mohammad Abu-Shaira
University of North Texas, Denton, TX, USA
Weishi Shi
University of North Texas
Data mining, Machine learning, Active learning.