Colorful Talks with Graphs: Human-Interpretable Graph Encodings for Large Language Models

📅 2026-02-11
📈 Citations: 0
Influential: 0
🤖 AI Summary
Large language models (LLMs) exhibit limited performance on graph-structured tasks due to weak structural awareness, permutation invariance, and insufficient capacity for complex relational reasoning. This work proposes a human-interpretable graph-to-text encoding method that, for the first time, maps refined Weisfeiler–Lehman similarity classes to semantic color tokens instead of conventional numerical symbols, thereby effectively integrating graph structural information into LLM inputs. By leveraging structure-aware prompt engineering, the approach substantially enhances model performance on both algorithmic reasoning and predictive graph tasks, with particularly notable gains on tasks requiring global structural understanding. The method demonstrates consistent effectiveness across both synthetic and real-world datasets.

📝 Abstract
Graph problems are fundamentally challenging for large language models (LLMs). While LLMs excel at processing unstructured text, graph tasks require reasoning over explicit structure, permutation invariance, and computationally complex relationships, creating a mismatch with the representations of text-based models. Our work investigates how LLMs can be effectively applied to graph problems despite these barriers. We introduce a human-interpretable structural encoding strategy for graph-to-text translation that injects graph structure directly into natural language prompts. Our method computes a variant of Weisfeiler-Lehman (WL) similarity classes and maps them to human-like color tokens rather than numeric labels. The key insight is that semantically meaningful, human-interpretable cues may be processed more effectively by LLMs than opaque symbolic encodings. Experimental results on multiple algorithmic and predictive graph tasks show considerable improvements from our method on both synthetic and real-world datasets. By capturing both local and global dependencies, our method enhances LLM performance, especially on graph tasks that require reasoning over global graph structure.
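The encoding idea described above can be sketched as follows: run standard 1-WL color refinement to partition nodes into similarity classes, then render each class as a color word instead of an integer when serializing the graph into a prompt. This is a hypothetical illustration only; the paper's exact WL variant, color vocabulary, and prompt template are not specified here, so the function names, the `COLOR_WORDS` list, and the prompt format below are assumptions.

```python
# Sketch: 1-WL color refinement with classes rendered as color words
# (illustrative; the paper's actual refinement variant may differ).

COLOR_WORDS = ["red", "blue", "green", "yellow", "purple",
               "orange", "pink", "brown", "cyan", "magenta"]

def wl_color_classes(adj, rounds=2):
    """adj: dict node -> list of neighbors. Returns node -> color word."""
    # Start with a uniform color for every node.
    colors = {v: 0 for v in adj}
    for _ in range(rounds):
        # Signature = own color plus the sorted multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Compress signatures into compact integer class labels.
        index, new_colors = {}, {}
        for v in adj:
            sig = signatures[v]
            if sig not in index:
                index[sig] = len(index)
            new_colors[v] = index[sig]
        if new_colors == colors:  # refinement has stabilized
            break
        colors = new_colors
    # Map each integer class to a human-readable color token.
    return {v: COLOR_WORDS[c % len(COLOR_WORDS)] for v, c in colors.items()}

def graph_to_prompt(adj):
    """Serialize the graph as text, annotating nodes with color tokens."""
    tokens = wl_color_classes(adj)
    return "\n".join(
        f"Node {v} ({tokens[v]}) connects to: "
        + ", ".join(str(u) for u in sorted(adj[v]))
        for v in sorted(adj)
    )

# Example: a path 0-1-2-3. The two endpoints fall into one WL class
# and the two interior nodes into another.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(graph_to_prompt(path))
```

On the path graph, the endpoints receive one color word and the interior nodes another, so the prompt exposes structural roles (degree-1 vs. degree-2 nodes) in vocabulary an LLM has seen in natural text, rather than as opaque integer labels.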
Problem

Research questions and friction points this paper is trying to address.

graph reasoning
large language models
structural representation
permutation invariance
graph-to-text translation
Innovation

Methods, ideas, or system contributions that make the work stand out.

graph encoding
human-interpretable
Weisfeiler-Lehman
large language models
color tokens