TRACE: Learning to Compute on Graphs

📅 2025-09-26
📈 Citations: 0
Influential: 0
🤖 AI Summary
Modeling the functional behavior of computation graphs, i.e., their computational capability, is a fundamental challenge in graph representation learning: mainstream message-passing neural networks (MPNNs) and standard Transformers fail to capture the position-awareness and hierarchical structure inherent in computation. This work introduces TRACE, a paradigm that pairs a hierarchical Transformer architecture aligned with the natural layering of computational workflows with a function shift learning objective, decoupling global function modeling into two stages: local approximation and cross-graph generalization. Evaluated end-to-end on electronic circuit graph benchmarks, TRACE significantly outperforms all existing models, achieving, for the first time, high accuracy and strong generalization in modeling the functional behavior of complex computation graphs.

📝 Abstract
Learning to compute, the ability to model the functional behavior of a computational graph, is a fundamental challenge for graph representation learning. Yet the dominant paradigm is architecturally mismatched for this task: the permutation-invariant aggregation at the heart of mainstream message-passing neural networks (MPNNs) and their conventional Transformer-based counterparts prevents models from capturing the position-aware, hierarchical nature of computation. To resolve this, we introduce TRACE, a new paradigm built on an architecturally sound backbone and a principled learning objective. First, TRACE employs a Hierarchical Transformer that mirrors the step-by-step flow of computation, providing a faithful architectural backbone that replaces flawed permutation-invariant aggregation. Second, we introduce function shift learning, a novel objective that decouples the learning problem. Instead of predicting the complex global function directly, our model is trained to predict only the function shift: the discrepancy between the true global function and a simple local approximation that assumes input independence. We validate this paradigm on electronic circuits, one of the most complex and economically critical classes of computational graphs. Across a comprehensive suite of benchmarks, TRACE substantially outperforms all prior architectures. These results demonstrate that our architecturally aligned backbone and decoupled learning objective form a more robust paradigm for the fundamental challenge of learning to compute on graphs.
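To make the function-shift idea in the abstract concrete, here is a minimal sketch under an illustrative assumption: node values are signal probabilities in a circuit, and the local approximation treats the inputs of an AND gate as statistically independent. Every name here is hypothetical, not the paper's actual formulation.

```python
# Hedged sketch of "function shift" learning on a toy circuit node.
# Assumption (not from the paper): node values are signal probabilities,
# and the local approximation multiplies AND-gate input probabilities
# as if the inputs were independent.

def local_approximation(input_probs):
    """Probability an AND gate outputs 1, assuming independent inputs."""
    p = 1.0
    for q in input_probs:
        p *= q
    return p

def function_shift(true_prob, input_probs):
    """The residual a model would be trained to predict instead of the
    full global function: true value minus the simple local estimate."""
    return true_prob - local_approximation(input_probs)

# Correlated inputs make the true probability deviate from the
# independence-based estimate; the shift captures that discrepancy.
shift = function_shift(0.3, [0.5, 0.5])  # 0.3 - 0.25
```

At inference, the full function would be recovered by adding the local approximation back to the predicted shift, which is what makes the decoupling useful: the model only learns the correction term.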
Problem

Research questions and friction points this paper is trying to address.

Modeling computational graph functional behavior accurately
Addressing architectural mismatch in graph representation learning
Capturing position-aware hierarchical computation in graphs
Innovation

Methods, ideas, or system contributions that make the work stand out.

Hierarchical Transformer mirrors step-by-step computation flow
Function shift learning decouples complex global function prediction
Architecturally-aligned backbone replaces flawed permutation-invariant aggregation
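One plausible reading of "mirrors step-by-step computation flow" is that nodes are grouped by topological depth and processed one level at a time, rather than aggregated permutation-invariantly. The sketch below shows only that grouping step; it is illustrative and not the paper's actual implementation.

```python
# Hypothetical sketch: group DAG nodes into topological levels, the
# layering a hierarchical Transformer could process level by level.
from collections import defaultdict

def topological_levels(num_nodes, edges):
    """level(v) = 0 for inputs, else 1 + max level of predecessors.
    Returns nodes grouped by level, in computation order."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    level = {}
    def depth(v):
        if v not in level:
            level[v] = 0 if not preds[v] else 1 + max(depth(u) for u in preds[v])
        return level[v]
    for v in range(num_nodes):
        depth(v)
    groups = defaultdict(list)
    for v, d in level.items():
        groups[d].append(v)
    return [sorted(groups[d]) for d in sorted(groups)]

# Two inputs feeding a gate that feeds an output:
levels = topological_levels(4, [(0, 2), (1, 2), (2, 3)])
# levels == [[0, 1], [2], [3]]
```

Attending within and across these level groups, in order, would give the position-aware, hierarchical inductive bias the bullets above describe.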
Ziyang Zheng
Shanghai Jiao Tong University
Signal Processing · Inverse Problem · Photonic Computing

Jiaying Zhu
The Chinese University of Hong Kong

Jingyi Zhou
The Chinese University of Hong Kong

Qiang Xu
The Chinese University of Hong Kong