🤖 AI Summary
Large language models (LLMs) struggle to efficiently process structured graph data, as existing approaches rely on costly text-based graph serialization, extensive post-training, and fragile cross-modal alignment. Method: This paper proposes an implicit graph knowledge injection paradigm that eschews runtime access to raw graphs or explicit modality alignment; instead, it employs lightweight LoRA-based fine-tuning to internalize graph-structural knowledge—particularly relational path semantics—directly into LLM parameters. Contribution/Results: By bypassing conventional graph-to-text conversion and cross-modal alignment bottlenecks, the method achieves significant improvements over state-of-the-art methods across multiple graph reasoning benchmarks. It offers low inference overhead, strong generalization to unseen graph structures, and scalability to large-scale graphs—establishing a novel, practical pathway for deploying LLMs on real-world graph tasks.
📝 Abstract
Large Language Models (LLMs) have demonstrated remarkable capabilities in modeling sequential textual data and generalizing across diverse tasks. However, adapting LLMs to effectively handle structured data, such as knowledge graphs or web data, remains a challenging problem. Some approaches adopt complex strategies to convert graphs into text sequences, resulting in significant token overhead and rendering them impractical for large-scale graphs. Others introduce additional modules to encode graphs into fixed-size token representations for LLMs. However, these methods typically require large-scale post-training on graph-text corpora and complex alignment procedures, yet often yield sub-optimal results due to poor modality alignment. Inspired by in-parameter knowledge injection for test-time adaptation of LLMs, we propose GRIP, a novel framework that equips LLMs with the ability to internalize complex relational information from graphs through carefully designed fine-tuning tasks. This knowledge is efficiently stored within lightweight LoRA parameters, enabling the fine-tuned LLM to perform a wide range of graph-related tasks without requiring access to the original graph at inference time. Extensive experiments across multiple benchmarks validate the effectiveness and efficiency of our approach.
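The abstract's key mechanism is storing graph knowledge in lightweight LoRA parameters while the base model stays frozen. A minimal NumPy sketch of the underlying low-rank adaptation idea is below; all names, dimensions, and hyperparameters are illustrative, not GRIP's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

class LoRALinear:
    """Frozen dense layer plus a trainable low-rank update (the LoRA idea).

    Only A and B would be trained, so the internalized graph knowledge the
    abstract describes lives entirely in these small adapter matrices.
    """

    def __init__(self, d_in, d_out, rank=8, alpha=16.0):
        self.W = rng.standard_normal((d_out, d_in)) / np.sqrt(d_in)  # frozen base weight
        self.A = rng.standard_normal((rank, d_in)) * 0.01            # trainable down-projection
        self.B = np.zeros((d_out, rank))                             # trainable up-projection, zero-init
        self.scale = alpha / rank

    def __call__(self, x):
        # Effective weight is W + scale * B @ A; computed without materializing it.
        return x @ self.W.T + self.scale * (x @ self.A.T) @ self.B.T

    def n_adapter_params(self):
        return self.A.size + self.B.size

layer = LoRALinear(d_in=4096, d_out=4096, rank=8)
x = rng.standard_normal((2, 4096))
y = layer(x)
print(y.shape)                   # (2, 4096)
print(layer.n_adapter_params())  # 65536 adapter params vs ~16.7M in the base weight
```

Because B is zero-initialized, the adapted layer initially matches the frozen base layer exactly; fine-tuning then moves only the 65k adapter parameters rather than the 16.7M base parameters, which is why such adapters are cheap to train, store, and swap per task.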