🤖 AI Summary
This work proposes HS2C, a novel framework that integrates graph homophily principles into large language model (LLM) reasoning—addressing the limitations of existing graph-LLM approaches that rely on random sampling and consequently suffer from noise sensitivity and unstable inference. HS2C achieves global hierarchical graph partitioning through structural entropy minimization to identify semantically homogeneous communities, which then guide the LLM in performing context-aware, differentiated aggregation. This joint compression of structure and semantics enables more efficient and robust reasoning. Extensive experiments across 10 node-level and 7 graph-level benchmarks demonstrate that HS2C significantly improves compression rates while simultaneously enhancing both accuracy and stability of inference, exhibiting strong scalability and generalization across diverse tasks.
📝 Abstract
Large language models (LLMs) have demonstrated promising capabilities in Text-Attributed Graph (TAG) understanding. Recent studies typically verbalize graph structures via handcrafted prompts, feeding the target node and its neighborhood context into LLMs. However, constrained by the context window, existing methods mainly resort to random sampling, often implemented by randomly dropping nodes or edges, which inevitably introduces noise and causes reasoning instability. We argue that graphs inherently contain rich structural and semantic information, and that exploiting it effectively can unlock gains in LLM reasoning performance. To this end, we propose Homophily-aware Structural and Semantic Compression for LLMs (HS2C), a framework centered on exploiting graph homophily. Structurally, guided by the principle of Structural Entropy minimization, we perform a global hierarchical partition that decodes the graph's essential topology. This partition identifies naturally cohesive, homophilic communities while discarding stochastic connectivity noise. Semantically, we deliver the detected structural homophily to the LLM, empowering it to perform differentiated semantic aggregation based on predefined community types. This process compresses redundant background context into concise community-level consensus, selectively preserving semantically homophilic information aligned with the target node. Extensive experiments on 10 node-level benchmarks across LLMs of varying sizes and families demonstrate that, by feeding LLMs structurally and semantically compressed inputs, HS2C simultaneously improves the compression rate and downstream inference accuracy, validating its superiority and scalability. Extensions to 7 diverse graph-level benchmarks further confirm HS2C's task generalizability.