DANCE: Dynamic, Available, Neighbor-gated Condensation for Federated Text-Attributed Graphs

πŸ“… 2026-01-23
πŸ“ˆ Citations: 0
✨ Influential: 0
πŸ€– AI Summary
This work addresses three challenges in federated text-attributed graph learning: the high computational overhead of large language models (LLMs), the suboptimal performance of one-shot graph condensation, and limited interpretability. To this end, the authors propose DANCE, a framework featuring a "model-in-the-loop" dynamic graph condensation mechanism that adaptively refreshes condensed graph structures round by round. DANCE also stores locally auditable evidence packs to make the LLM-driven condensation process traceable and transparent. By jointly leveraging graph condensation, federated learning, and neighbor gating, DANCE achieves an average accuracy improvement of 2.33% at only an 8% node condensation ratio across eight benchmark datasets, while reducing LLM token consumption by 33.42%.

πŸ“ Abstract
Federated graph learning (FGL) enables collaborative training on graph data across multiple clients. With the rise of large language models (LLMs), textual attributes in FGL graphs are gaining attention. Text-attributed graph federated learning (TAG-FGL) improves FGL by explicitly leveraging LLMs to process and integrate these textual features. However, current TAG-FGL methods face three main challenges: **(1) Overhead.** LLMs for processing long texts incur high token and computation costs. To make TAG-FGL practical, we introduce graph condensation (GC) to reduce computation load, but this choice also brings new issues. **(2) Suboptimal.** To reduce LLM overhead, we introduce GC into TAG-FGL by compressing multi-hop texts/neighborhoods into a condensed core with fixed LLM surrogates. However, this one-shot condensation is often not client-adaptive, leading to suboptimal performance. **(3) Interpretability.** LLM-based condensation further introduces a black-box bottleneck: summaries lack faithful attribution and clear grounding to specific source spans, making local inspection and auditing difficult. To address the above issues, we propose **DANCE**, a new TAG-FGL paradigm with GC. To improve **suboptimal** performance, DANCE performs round-wise, model-in-the-loop condensation refresh using the latest global model. To enhance **interpretability**, DANCE preserves provenance by storing locally inspectable evidence packs that trace predictions to selected neighbors and source text spans. Across 8 TAG datasets, DANCE improves accuracy by **2.33%** at an **8%** condensation ratio, with **33.42%** fewer tokens than baselines.
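The abstract's round-wise, model-in-the-loop refresh can be pictured as a federated loop in which each client re-condenses its local graph with the latest global model and records an evidence pack per retained node. The sketch below is illustrative only: the names `Client`, `condense`, `EvidencePack`, and the scoring/gating logic are hypothetical stand-ins, not the paper's actual algorithm or API.

```python
from dataclasses import dataclass, field

@dataclass
class EvidencePack:
    # Locally stored provenance for one condensed node (hypothetical schema):
    # which neighbors the gating step kept, and which source text spans
    # the condensed representation is grounded in.
    node_id: int
    selected_neighbors: list
    source_spans: list  # list of (start, end) character offsets

@dataclass
class Client:
    texts: dict                              # node id -> raw text attribute
    neighbors: dict                          # node id -> list of neighbor ids
    evidence: dict = field(default_factory=dict)

    def condense(self, global_model, ratio=0.08):
        """Re-condense the local graph each round ("model-in-the-loop"):
        score nodes with the latest global model, keep the top `ratio`
        fraction, gate neighbors, and record an auditable evidence pack."""
        scored = sorted(self.texts, key=global_model, reverse=True)
        keep = scored[:max(1, int(ratio * len(scored)))]
        for nid in keep:
            gated = [n for n in self.neighbors.get(nid, []) if n in self.texts]
            self.evidence[nid] = EvidencePack(
                node_id=nid,
                selected_neighbors=gated,
                source_spans=[(0, len(self.texts[nid]))],  # placeholder span
            )
        return keep

def federated_rounds(clients, num_rounds=3):
    """Each round: every client refreshes its condensed core against the
    current global model; local training and server-side aggregation
    (stubbed out here) would follow in a real TAG-FGL system."""
    global_model = lambda nid: nid  # stand-in scoring model
    history = []
    for _ in range(num_rounds):
        cores = [client.condense(global_model) for client in clients]
        history.append(cores)
        # server aggregation of client updates would go here
    return history
```

The key contrast with one-shot condensation is that `condense` runs inside the round loop, so the condensed core tracks the evolving global model, while the `evidence` store keeps the selection locally inspectable.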
Problem

Research questions and friction points this paper is trying to address.

Federated Graph Learning
Text-Attributed Graphs
Graph Condensation
Large Language Models
Interpretability
Innovation

Methods, ideas, or system contributions that make the work stand out.

Federated Graph Learning
Text-Attributed Graphs
Graph Condensation
Large Language Models
Interpretability