🤖 AI Summary
This study investigates how graph description ordering affects large language models’ (LLMs) performance on graph reasoning tasks. We systematically evaluate four structured graph representations—adjacency list, edge list, adjacency matrix, and node-neighbor list—across six canonical graph tasks (e.g., shortest path, connectivity) and six state-of-the-art LLMs, using standardized prompting templates and controlled-variable experiments. Our results reveal, for the first time, that description ordering significantly impacts LLMs’ structural understanding of graphs, with pronounced task-specific sensitivity (e.g., high for shortest path, low for connectivity) and strong cross-model consistency. Based on these findings, we propose a *task-driven description ordering optimization paradigm*—a lightweight, zero-parameter, and transferable technique. Empirical evaluation shows that optimized ordering yields an average accuracy improvement of 12.7% across tasks and models, establishing it as an effective, implementation-efficient enhancement for graph reasoning with LLMs.
📝 Abstract
Large language models (LLMs) have achieved significant success in reasoning tasks, including mathematical reasoning and logical deduction. Among these reasoning tasks, graph problems stand out due to their complexity and unique structural characteristics, attracting considerable attention from researchers. Previous studies have explored LLMs' graph reasoning abilities through various techniques, such as different encoding methods for graph structures and the use of carefully designed prompts. However, a critical factor has been mostly overlooked: the prompt sequential order in which graph descriptions are presented to the models. In this study, we present the first comprehensive analysis of how the order of graph descriptions impacts LLM performance. Specifically, we comprehensively evaluate four graph description orders across six graph problems using six mainstream LLMs. The results reveal that: (1) ordered graph descriptions significantly improve LLMs' comprehension of graph structures; (2) the robustness of LLMs to graph description order varies across different tasks; and (3) the impact of graph order on performance is closely related to the inherent characteristics of tasks. This study provides a critical advancement in the application of LLMs for solving graph-related problems, paving the way for future research to optimize model performance through strategic graph description ordering.