Mixture of Demonstrations for Textual Graph Understanding and Question Answering

📅 2026-03-23
📈 Citations: 0
Influential: 0
📝 Abstract
Textual graph-based retrieval-augmented generation (GraphRAG) has emerged as a powerful paradigm for enhancing large language models (LLMs) in domain-specific question answering. Existing approaches primarily focus on zero-shot GraphRAG, yet selecting high-quality demonstrations is crucial for improving reasoning and answer accuracy. Furthermore, recent studies have shown that retrieved subgraphs often contain irrelevant information, which can degrade reasoning performance. In this paper, we propose MixDemo, a novel GraphRAG framework enhanced with a Mixture-of-Experts (MoE) mechanism for selecting the most informative demonstrations under diverse question contexts. To further reduce noise in the retrieved subgraphs, we introduce a query-specific graph encoder that selectively attends to information most relevant to the query. Extensive experiments across multiple textual graph benchmarks show that MixDemo significantly outperforms existing methods.
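The two mechanisms the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: all embeddings, weight matrices, dimensions, and the bilinear expert-scoring form are assumptions chosen to show the general shape of MoE-gated demonstration selection and query-conditioned attention over retrieved-subgraph nodes.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# --- Hypothetical setup: random stand-ins for learned embeddings/weights ---
d = 16                                       # embedding dimension (assumed)
n_experts = 4                                # demonstration-selection experts
n_demos = 10                                 # demonstration pool size
k = 3                                        # demonstrations kept for the prompt

q = rng.normal(size=d)                       # query embedding
demos = rng.normal(size=(n_demos, d))        # demonstration embeddings
W_gate = rng.normal(size=(n_experts, d))     # MoE gating weights
experts = rng.normal(size=(n_experts, d, d)) # per-expert scoring matrices

# MoE gating: the question context decides how much to trust each expert.
gate = softmax(W_gate @ q)                   # (n_experts,)

# Each expert scores every demonstration via a bilinear form q^T W_e d_i.
scores = np.einsum("d,edf,nf->en", q, experts, demos)  # (n_experts, n_demos)

# Mixture score per demonstration; keep the top-k as in-context examples.
mixed = gate @ scores                        # (n_demos,)
topk = np.argsort(mixed)[::-1][:k]

# Query-specific graph encoding: attention keyed by the query downweights
# nodes of the retrieved subgraph that are irrelevant to the question.
nodes = rng.normal(size=(8, d))              # retrieved-subgraph node embeddings
attn = softmax(nodes @ q / np.sqrt(d))       # (8,) attention over nodes
graph_repr = attn @ nodes                    # query-conditioned pooled vector
```

In a real system the selected demonstrations and the pooled graph representation would both feed into the LLM prompt; here they are just a top-k index list and a vector.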
Problem

Research questions and friction points this paper is trying to address.

GraphRAG
demonstration selection
textual graph
retrieval-augmented generation
irrelevant information
Innovation

Methods, ideas, or system contributions that make the work stand out.

Mixture-of-Experts
GraphRAG
demonstration selection
query-specific graph encoder
textual graph understanding