🤖 AI Summary
This work addresses three critical failure modes in multi-objective graph self-supervised learning—negative transfer, utility drift, and objective starvation—arising from conflicting objectives, training instability, and the neglect of certain tasks. To mitigate these issues, the authors propose ControlG, a novel framework that, for the first time, integrates control theory into this domain by formulating multi-objective optimization as a sequential resource allocation problem. ControlG employs a PID controller coupled with a Pareto-aware log-hypervolume planner to dynamically schedule the optimization weights and timing of each objective. By incorporating difficulty estimation and modeling inter-objective antagonism, the method consistently outperforms state-of-the-art approaches across nine datasets and produces interpretable scheduling logs that reveal which objectives predominantly drive the learning process.
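The core scheduling idea described above, a PID controller steering each objective's optimization budget toward a planner-chosen target, can be sketched as follows. This is a hedged illustration, not the authors' implementation: the class name `PIDScheduler`, the gain values, and the share-based error signal are all assumptions for exposition.

```python
class PIDScheduler:
    """Illustrative sketch: one PID loop per objective.

    The error for objective i is (target budget share - actual share);
    the control signal nudges the next round's weight toward the target.
    Gains kp/ki/kd are arbitrary placeholder values, not from the paper.
    """

    def __init__(self, n_objectives, kp=0.5, ki=0.05, kd=0.1):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = [0.0] * n_objectives
        self.prev_error = [0.0] * n_objectives

    def step(self, target_share, actual_share):
        weights = []
        for i, (t, a) in enumerate(zip(target_share, actual_share)):
            err = t - a                          # proportional term
            self.integral[i] += err              # integral term (accumulated error)
            deriv = err - self.prev_error[i]     # derivative term (error change)
            self.prev_error[i] = err
            u = self.kp * err + self.ki * self.integral[i] + self.kd * deriv
            weights.append(max(0.0, t + u))      # adjusted, clipped at zero
        s = sum(weights) or 1.0
        return [w / s for w in weights]          # renormalize to a distribution
```

In this sketch, an objective running below its target share (e.g. target 0.5, actual 0.4) receives a larger weight on the next round, which is the feedback behavior that counteracts starvation.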
📝 Abstract
Can multi-task self-supervised learning on graphs be coordinated without the usual tug-of-war between objectives? Graph self-supervised learning (SSL) offers a growing toolbox of pretext objectives (mutual information maximization, reconstruction, contrastive learning), yet combining them reliably remains challenging due to objective interference and training instability. Most multi-pretext pipelines use per-update mixing, forcing every parameter update to be a compromise and leading to three failure modes: Disagreement (conflict-induced negative transfer), Drift (nonstationary objective utility), and Drought (hidden starvation of underserved objectives). We argue that coordination is fundamentally a temporal allocation problem: deciding when each objective receives optimization budget, not merely how to weight the objectives. We introduce ControlG, a control-theoretic framework that recasts multi-objective graph SSL as feedback-controlled temporal allocation: it estimates per-objective difficulty and pairwise antagonism, plans target budgets via a Pareto-aware log-hypervolume planner, and schedules with a Proportional-Integral-Derivative (PID) controller. Across nine datasets, ControlG consistently outperforms state-of-the-art baselines while producing an auditable schedule that reveals which objectives drove learning.
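One of the quantities the abstract mentions, pairwise antagonism between objectives, is commonly measured via gradient conflict. The sketch below is an assumed formulation for illustration only (the paper's exact estimator may differ): antagonism between two objectives is taken as the negative cosine similarity of their parameter gradients, clipped at zero, so values near 1 indicate strongly conflicting update directions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two flat gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def antagonism_matrix(grads):
    """Illustrative pairwise antagonism: A[i][j] = max(0, -cos(g_i, g_j)).

    grads: list of flattened per-objective gradient vectors.
    A[i][j] near 1 means objectives i and j push parameters in
    opposing directions (a signature of negative transfer).
    """
    n = len(grads)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                A[i][j] = max(0.0, -cosine(grads[i], grads[j]))
    return A
```

For example, two objectives with exactly opposite gradients score antagonism 1.0, while orthogonal gradients score 0.0; a planner could then divert budget away from co-scheduling highly antagonistic pairs.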