🤖 AI Summary
This work addresses the limitation of existing graph learning approaches, which typically operate in isolation within a single modality and task, thereby hindering the cross-task and cross-modal reuse of structural knowledge. To overcome this, the authors propose G-Substrate, a novel framework that models graph structures as persistent, shareable substrates. By unifying structural patterns and employing a role-interleaved training strategy, G-Substrate enables collaborative learning across multiple tasks and modalities. This approach facilitates the continuous accumulation and transfer of graph-structured knowledge, consistently outperforming both isolated training and conventional multi-task learning methods across diverse domains, modalities, and tasks.
📝 Abstract
Graphs provide a natural representation of relational structure that arises across diverse domains. Despite this ubiquity, graph structure is typically learned in a modality- and task-isolated manner, where graph representations are constructed within individual task contexts and discarded thereafter. As a result, structural regularities across modalities and tasks are repeatedly reconstructed rather than accumulated at the level of intermediate graph representations. This motivates a representation-learning question: how should graph structure be organized so that it can persist and accumulate across heterogeneous modalities and tasks? We adopt a representation-centric perspective in which graph structure is treated as a structural substrate that persists across learning contexts. To instantiate this perspective, we propose G-Substrate, a graph substrate framework that organizes learning around shared graph structures. G-Substrate comprises two complementary mechanisms: a unified structural schema that ensures compatibility among graph representations across heterogeneous modalities and tasks, and an interleaved role-based training strategy that exposes the same graph structure to multiple functional roles during learning. Experiments across multiple domains, modalities, and tasks show that G-Substrate outperforms task-isolated and naive multi-task learning methods.
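The two mechanisms named in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation: every class, function, and role name below (`GraphSubstrate`, `interleaved_training`, the toy `link`/`label` roles) is a hypothetical illustration of the idea of a persistent shared graph exposed to multiple task roles in turn.

```python
# Hypothetical sketch of the two G-Substrate mechanisms: a unified
# structural schema (a persistent graph shared across tasks) and
# role-interleaved training (each task "role" updates the same graph
# in turn). All names are illustrative, not taken from the paper.
from dataclasses import dataclass, field
from itertools import cycle

@dataclass
class GraphSubstrate:
    """Unified schema: nodes and typed edges that persist across tasks."""
    nodes: dict = field(default_factory=dict)   # node_id -> feature dict
    edges: list = field(default_factory=list)   # (src, dst, relation) triples

    def add_node(self, node_id, **features):
        # Merge features so structure accumulates instead of being rebuilt.
        self.nodes.setdefault(node_id, {}).update(features)

    def add_edge(self, src, dst, relation):
        self.edges.append((src, dst, relation))

def interleaved_training(substrate, roles, steps):
    """Role-interleaved schedule: expose the same substrate to each
    functional role (task head) in turn, rather than training each
    task on its own throwaway graph."""
    log = []
    for step, (name, update_fn) in zip(range(steps), cycle(roles.items())):
        update_fn(substrate)   # each role reads and writes the shared graph
        log.append((step, name))
    return log

# Two toy roles standing in for distinct tasks over one substrate.
def link_role(s):
    s.add_edge("a", "b", "related")

def label_role(s):
    s.add_node("a", label="seed")

substrate = GraphSubstrate()
substrate.add_node("a")
substrate.add_node("b")
schedule = interleaved_training(
    substrate, {"link": link_role, "label": label_role}, steps=4
)
print(schedule)  # roles alternate over the same persistent structure
```

The point of the sketch is only the shape of the idea: structural knowledge lives in one object that outlasts any single task, and training alternates roles over it instead of building and discarding per-task graphs.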