🤖 AI Summary
Large language models (LLMs) face dual bottlenecks in large-scale graph analysis: limited context length and topology-agnostic reasoning. To address these, the authors propose GraphChain, a framework that lets LLMs analyze complex graphs through dynamically composed sequences of specialized tools. It introduces two key mechanisms: (1) Progressive Graph Distillation, a reinforcement-learning approach that generates optimized tool sequences balancing task relevance against information compression; and (2) Structure-aware Test-Time Adaptation, which uses spectral graph properties and lightweight adapters to tailor tool-selection strategies to diverse graph topologies without costly retraining. Experiments on multiple large-scale graph benchmarks show that GraphChain significantly improves LLMs' accuracy and scalability on tasks such as subgraph matching and anomaly detection, reporting an average 12.7% gain over state-of-the-art methods while reducing inference overhead by 38%.
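The progressive distillation idea, chaining tools that shrink the graph while tracking how much task-relevant signal survives, can be sketched as a greedy loop. Note that the tool names, compression ratios, and relevance scores below are invented for illustration and are not taken from the paper, whose policy is learned with reinforcement learning rather than hand-coded:

```python
# Hedged sketch of progressive graph distillation. Each "tool" is a
# hypothetical (name, compression_ratio, relevance_kept) triple; the
# real GraphChain policy is learned, not a fixed greedy heuristic.

def distill(graph_size, tools, budget):
    """Greedily chain compression tools until the graph fits the budget,
    tracking how much task relevance the sequence preserves."""
    sequence, relevance = [], 1.0
    while graph_size > budget:
        # Prefer the tool keeping the most relevance per unit of shrink.
        name, ratio, kept = max(tools, key=lambda t: t[2] / t[1])
        graph_size = int(graph_size * ratio)
        relevance *= kept
        sequence.append(name)
    return sequence, graph_size, relevance

# Toy run: distill a 10,000-node graph down to a 500-node budget.
tools = [("drop_leaves", 0.7, 0.98), ("coarsen_communities", 0.4, 0.90)]
seq, size, rel = distill(10_000, tools, budget=500)
```

In this toy setting the heuristic repeatedly applies `coarsen_communities` (the best relevance-per-shrink trade-off) until the graph fits; the learned policy in the paper instead balances this trade-off per task.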
📝 Abstract
Large Language Models (LLMs) face significant limitations when applied to large-scale graphs, struggling with context constraints and inflexible reasoning. We present GraphChain, a framework that enables LLMs to analyze complex graphs through dynamic sequences of specialized tools, mimicking human exploratory intelligence. Our approach introduces two key innovations: (1) Progressive Graph Distillation, a reinforcement learning mechanism that generates optimized tool sequences balancing task relevance with information compression, and (2) Structure-aware Test-Time Adaptation, which efficiently tailors tool selection strategies to diverse graph topologies using spectral properties and lightweight adapters without costly retraining. Experiments show GraphChain significantly outperforms prior methods, enabling scalable and adaptive LLM-driven graph analysis.
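As a rough illustration of structure-aware tool selection, the sketch below chooses a tool chain from a coarse degree statistic, standing in for the spectral features GraphChain actually uses; all tool names and thresholds here are assumptions, not the paper's API:

```python
# Illustrative only: selecting a (hypothetical) tool sequence from a
# simple structural cue, as a stand-in for spectral-feature routing.

def degree_stats(adj):
    """Mean and max degree of an adjacency-list graph."""
    degs = [len(nbrs) for nbrs in adj.values()]
    return sum(degs) / len(degs), max(degs)

def select_tool_sequence(adj):
    """Route hub-dominated graphs to hub-centric tools, everything
    else to sampling-based tools. Threshold chosen arbitrarily."""
    mean_deg, max_deg = degree_stats(adj)
    if max_deg > 2 * mean_deg:  # hub-dominated topology
        return ["extract_hubs", "ego_subgraph", "summarize"]
    return ["sample_subgraph", "count_motifs", "summarize"]

# Toy star graph: node 0 is a hub, so the hub-centric chain is picked.
adj = {0: [1, 2, 3, 4, 5], 1: [0], 2: [0], 3: [0], 4: [0], 5: [0]}
print(select_tool_sequence(adj))
```

The point of test-time adaptation is that this routing happens per input graph, without retraining the LLM or the tools themselves.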