🤖 AI Summary
This work proposes RAMP, which addresses a key limitation of existing methods: they compress node text in text-rich graphs into static embeddings, causing information loss and decoupling structural reasoning from the original semantics. RAMP instead integrates a large language model (LLM) as a graph-native message-aggregation operator that anchors inference on raw textual content during message passing and refines neighbor messages, tightly coupling structural propagation with contextual text understanding. Departing from the conventional feature-extraction paradigm, RAMP introduces a dual-representation mechanism grounded in raw text, supporting both discriminative and generative tasks in a unified way. Extensive experiments demonstrate that RAMP achieves state-of-the-art performance across multiple text-rich graph benchmarks, effectively bridging the gap between graph-structured message passing and deep textual reasoning.
📝 Abstract
Text-rich graphs, which integrate complex structural dependencies with abundant textual information, are ubiquitous yet remain challenging for existing learning paradigms. Conventional methods, and even LLM hybrids, compress rich text into static embeddings or summaries before structural reasoning, creating an information bottleneck and detaching updates from the raw content. We argue that in text-rich graphs, the text is not merely a node attribute but the primary medium through which structural relationships are manifested. We introduce RAMP, a Raw-text Anchored Message Passing approach that moves beyond using LLMs as mere feature extractors and instead recasts the LLM itself as a graph-native aggregation operator. RAMP exploits the text-rich nature of the graph via a novel dual-representation scheme: at every iteration it anchors inference on each node's raw text while propagating dynamically optimized messages from neighbors. It further handles both discriminative and generative tasks under a single unified generative formulation. Extensive experiments show that RAMP effectively bridges the gap between graph propagation and deep text reasoning, achieving competitive performance and offering new insights into the role of LLMs as graph kernels for general-purpose graph learning.
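To make the abstract's central idea concrete, here is a minimal sketch of raw-text-anchored message passing. This is an illustrative assumption, not the paper's implementation: every function name is hypothetical, and the LLM aggregation call is stubbed with a deterministic string operation so the example is self-contained and runnable.

```python
def llm_aggregate(node_text: str, neighbor_messages: list[str]) -> str:
    """Stand-in for the LLM aggregation operator.

    In RAMP this step would be a prompted LLM call that reads the node's
    raw text alongside its neighbors' messages and emits a refined message.
    Here we concatenate deterministically purely for illustration.
    """
    context = " | ".join(neighbor_messages)
    return f"{node_text} [context: {context}]"

def ramp_round(graph: dict[str, list[str]],
               texts: dict[str, str],
               messages: dict[str, str]) -> dict[str, str]:
    """One message-passing iteration anchored on each node's raw text.

    The node's original text (not a static embedding of it) is re-read at
    every round, so aggregation never detaches from the raw content.
    """
    return {
        node: llm_aggregate(texts[node], [messages[n] for n in neighbors])
        for node, neighbors in graph.items()
    }

# Toy text-rich graph: adjacency lists plus raw node text (hypothetical data).
graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
texts = {"a": "Paper on GNNs", "b": "Survey of LLMs", "c": "Graph benchmarks"}

# Messages are initialized from the raw text and refined over iterations.
messages = dict(texts)
for _ in range(2):
    messages = ramp_round(graph, texts, messages)
```

Note how `texts[node]` is passed into every round: the raw text is the fixed anchor, while only the propagated messages evolve, which is the contrast the abstract draws against pipelines that embed text once and then reason over frozen vectors.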