🤖 AI Summary
Existing graph prompting methods suffer from poor generalizability across diverse graph structures (e.g., generic graphs, protein graphs, hypergraphs) and rely heavily on parameter-intensive fine-tuning.
Method: We propose a parameter-free, universal graph encoding framework inspired by the Fock space of quantum many-body theory. For the first time, we introduce Fock-state representations into graph prompting, mapping arbitrary graph structures into interpretable, composable discrete sequences and enabling zero-shot adaptation across graph types without training. Combined with a frozen large language model, prefix tuning, and graph serialization, our approach achieves end-to-end graph-aware question answering without modifying the model architecture.
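To make the idea of a Fock-state-style discrete encoding concrete, here is a minimal, hypothetical sketch. The paper's exact construction is not reproduced; we assume the simplest analogue, in which each "mode" corresponds to a node and its occupation number counts the (hyper)edges incident to that node. The function name `fock_sequence` and the ket-style serialization are illustrative assumptions, not the authors' API.

```python
def fock_sequence(num_nodes, edges):
    """Hypothetical sketch: map a (hyper)graph to a Fock-style
    occupation-number sequence.

    `edges` is a list of node tuples; ordinary edges are 2-tuples,
    hyperedges may contain more nodes. The encoding is parameter-free:
    no learned weights, no training.
    """
    occupation = [0] * num_nodes
    for edge in edges:
        for node in edge:
            # Occupation number of a node = number of incident (hyper)edges.
            occupation[node] += 1
    # Serialize as a ket-like token string that a text prompt can consume.
    return "|" + ",".join(str(n) for n in occupation) + ">"

# Example: a triangle graph plus one hyperedge covering all three nodes.
print(fock_sequence(3, [(0, 1), (1, 2), (0, 2), (0, 1, 2)]))  # → |3,3,3>
```

Because the output is plain text, such a sequence can be prepended to a question and handed to a frozen LLM without any architectural change, which is the plug-and-play property the summary describes.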
Contribution/Results: Experiments demonstrate significant improvements over state-of-the-art graph prompting baselines across multiple graph-structured tasks. The method exhibits strong cross-structure generalization and plug-and-play usability, eliminating the need for task-specific parameter optimization.
📝 Abstract
Recent results show that modern Large Language Models (LLMs) are indeed capable of understanding and answering questions about structured data such as graphs. This new paradigm can lead to solutions that require less supervision while, at the same time, providing a model that can generalize and answer questions beyond the training labels. Existing proposals often use some description of the graph to create an ``augmented'' prompt that is fed to the LLM. For a chosen class of graphs, if a well-tailored graph encoder is deployed to work together with a pre-trained LLM, the model can answer graph-related questions well. Existing solutions for graph-based prompting range from graph serialization to graph transformers. In this work, we show that a parameter-free graph encoder based on Fock space representations, a concept borrowed from mathematical physics, is remarkably versatile in this problem setting. The simple construction, inherited directly from the theory with a few small adjustments, provides rich and informative graph encodings for a wide range of different graphs. We investigate the use of this idea for prefix-tuned prompts that leverage the capabilities of a pre-trained, frozen LLM. The modifications lead to a model that can answer graph-related questions -- from simple graphs to proteins to hypergraphs -- effectively and with minimal, if any, adjustments to the architecture. Our work significantly simplifies existing solutions and generalizes well across multiple different graph-based structures.