Context-Agent: Dynamic Discourse Trees for Non-Linear Dialogue

📅 2026-04-07
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses the challenge of maintaining coherence in large language models during nonlinear, hierarchical, and multi-branch human conversations, where inefficient context utilization often leads to degraded performance. To this end, the authors propose Context-Agent, a dynamic discourse tree framework that models multi-turn dialogues as an expandable tree structure, aligning with the inherent nonlinearity of natural conversation and enabling effective context maintenance and navigation across topical branches. Additionally, they introduce NTM, the first benchmark specifically designed for evaluating nonlinear long-range dialogue systems, thereby moving beyond conventional linear modeling paradigms. Experimental results demonstrate that the proposed approach significantly improves task completion rates and token efficiency across multiple large language models, substantiating the effectiveness of structured context management in complex, dynamic conversational settings.
📝 Abstract
Large Language Models demonstrate outstanding performance in many language tasks but still face fundamental challenges in managing the non-linear flow of human conversation. The prevalent approach of treating dialogue history as a flat, linear sequence is misaligned with the intrinsically hierarchical and branching structure of natural discourse, leading to inefficient context utilization and a loss of coherence during extended interactions involving topic shifts or instruction refinements. To address this limitation, we introduce Context-Agent, a novel framework that models multi-turn dialogue history as a dynamic tree structure. This approach mirrors the inherent non-linearity of conversation, enabling the model to maintain and navigate multiple dialogue branches corresponding to different topics. Furthermore, to facilitate robust evaluation, we introduce the Non-linear Task Multi-turn Dialogue (NTM) benchmark, specifically designed to assess model performance in long-horizon, non-linear scenarios. Our experiments demonstrate that Context-Agent enhances task completion rates and improves token efficiency across various LLMs, underscoring the value of structured context management for complex, dynamic dialogues. The dataset and code are available on GitHub.
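The abstract's central idea, representing dialogue history as a tree so that each topic branch carries only its own context, can be illustrated with a minimal sketch. This is not the authors' implementation; all class and method names here are hypothetical, and the example only shows the basic mechanics of branching and path-based context assembly.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DialogueNode:
    """One dialogue turn (user message + model reply) in a discourse tree."""
    user: str
    reply: str
    parent: Optional["DialogueNode"] = None
    children: list["DialogueNode"] = field(default_factory=list)

    def add_turn(self, user: str, reply: str) -> "DialogueNode":
        """Continue (or fork) the conversation by attaching a child turn."""
        child = DialogueNode(user, reply, parent=self)
        self.children.append(child)
        return child

    def branch_context(self) -> list[tuple[str, str]]:
        """Collect only the turns on the root-to-node path, so turns on
        other topical branches never enter this branch's prompt."""
        path = []
        node: Optional[DialogueNode] = self
        while node is not None:
            path.append((node.user, node.reply))
            node = node.parent
        return list(reversed(path))

# A topic shift opens a sibling branch off the shared root turn.
root = DialogueNode("Plan a trip to Kyoto.", "Sure - when are you going?")
travel = root.add_turn("In April.", "Cherry blossom season; book early.")
coding = root.add_turn("Actually, help me debug a script first.", "Paste the error.")
```

Here `travel.branch_context()` yields two turns (root plus the April follow-up) and excludes the debugging branch entirely, which is the token-efficiency argument in miniature: prompts grow with branch depth rather than with total conversation length.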
Problem

Research questions and friction points this paper is trying to address.

non-linear dialogue
discourse structure
context management
dialogue coherence
large language models
Innovation

Methods, ideas, or system contributions that make the work stand out.

dynamic discourse tree
non-linear dialogue
context management
multi-turn dialogue
structured conversation modeling
Junan Hu
Shandong University, China
Shudan Guo
Shandong University, China
Wenqi Liu
Shandong University, China
Jianhua Yin
Shandong University, China
Yinwei Wei
Shandong University | National University of Singapore
Multimedia Computing | Information Retrieval | Recommender System