Feedback Control for Multi-Objective Graph Self-Supervision

📅 2026-02-04
📈 Citations: 0
Influential: 0
🤖 AI Summary
This work addresses three critical failure modes in multi-objective graph self-supervised learning—negative transfer, utility drift, and objective starvation—arising from conflicting objectives, training instability, and the neglect of certain tasks. To mitigate these issues, the authors propose ControlG, a novel framework that, for the first time, integrates control theory into this domain by formulating multi-objective optimization as a sequential resource allocation problem. ControlG employs a PID controller coupled with a Pareto-aware log-hypervolume planner to dynamically schedule the optimization weights and timing of each objective. By incorporating difficulty estimation and modeling inter-objective antagonism, the method consistently outperforms state-of-the-art approaches across nine datasets and produces interpretable scheduling logs that reveal which objectives predominantly drive the learning process.
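The scheduling idea described above can be sketched as a simple PID loop: each objective has a target share of the optimization budget, and the controller nudges per-objective weights toward those targets. This is a minimal illustrative sketch, not the paper's implementation; the class name, gains (`kp`, `ki`, `kd`), and the normalization scheme are all assumptions.

```python
# Hedged sketch of PID-based budget scheduling in the spirit of ControlG.
# All names and gain values are illustrative assumptions, not the paper's design.

class PIDScheduler:
    """Tracks each objective's realized budget share and emits mixing weights."""

    def __init__(self, objectives, kp=0.5, ki=0.1, kd=0.05):
        self.objectives = list(objectives)
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = {o: 0.0 for o in self.objectives}
        self.prev_error = {o: 0.0 for o in self.objectives}
        self.spent = {o: 0.0 for o in self.objectives}  # budget consumed so far

    def step(self, targets):
        """targets: dict objective -> desired budget share (sums to 1).
        Returns normalized per-objective weights for the next update."""
        total = sum(self.spent.values()) or 1.0
        weights = {}
        for o in self.objectives:
            actual = self.spent[o] / total
            error = targets[o] - actual            # P: current shortfall
            self.integral[o] += error              # I: accumulated shortfall
            deriv = error - self.prev_error[o]     # D: shortfall trend
            self.prev_error[o] = error
            raw = self.kp * error + self.ki * self.integral[o] + self.kd * deriv
            weights[o] = max(raw, 0.0)             # no negative weights
        z = sum(weights.values())
        if z == 0.0:  # all objectives on target: fall back to uniform mixing
            return {o: 1.0 / len(self.objectives) for o in self.objectives}
        weights = {o: w / z for o, w in weights.items()}
        for o in self.objectives:
            self.spent[o] += weights[o]            # record allocated budget
        return weights
```

An under-served objective (large positive error) accumulates integral term, so starvation ("Drought") is corrected over time rather than in a single step.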

📝 Abstract
Can multi-task self-supervised learning on graphs be coordinated without the usual tug-of-war between objectives? Graph self-supervised learning (SSL) offers a growing toolbox of pretext objectives (mutual information, reconstruction, contrastive learning), yet combining them reliably remains a challenge due to objective interference and training instability. Most multi-pretext pipelines use per-update mixing, forcing every parameter update to be a compromise and leading to three failure modes: Disagreement (conflict-induced negative transfer), Drift (nonstationary objective utility), and Drought (hidden starvation of underserved objectives). We argue that coordination is fundamentally a temporal allocation problem: deciding when each objective receives optimization budget, not merely how to weight them. We introduce ControlG, a control-theoretic framework that recasts multi-objective graph SSL as feedback-controlled temporal allocation by estimating per-objective difficulty and pairwise antagonism, planning target budgets via a Pareto-aware log-hypervolume planner, and scheduling with a Proportional-Integral-Derivative (PID) controller. Across 9 datasets, ControlG consistently outperforms state-of-the-art baselines, while producing an auditable schedule that reveals which objectives drove learning.
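The "Pareto-aware log-hypervolume planner" can be intuited with a toy version: treat the current per-objective scores as a single point, measure the log of the box volume it dominates relative to a reference point, and allocate budget proportional to each objective's marginal effect on that log-volume. This is a deliberately simplified sketch under stated assumptions (single point rather than a full Pareto front, gradient-proportional targets); the function names are hypothetical.

```python
import math

# Hedged toy version of log-hypervolume budget planning. A single achieved
# point is assumed instead of a Pareto front; the real planner is richer.

def log_hypervolume(scores, ref):
    """Log of the axis-aligned box volume dominated by one point w.r.t. ref.
    Requires scores[i] > ref[i] for every objective i."""
    return sum(math.log(s - r) for s, r in zip(scores, ref))

def plan_targets(scores, ref):
    """Budget shares proportional to the log-HV gradient:
    d/ds_i log HV = 1 / (s_i - r_i),
    so objectives closest to the reference (least progressed) get more budget."""
    grads = [1.0 / (s - r) for s, r in zip(scores, ref)]
    z = sum(grads)
    return [g / z for g in grads]
```

For example, with scores `[0.9, 0.6]` against reference `[0.5, 0.5]`, the second objective has made less progress, so the planner hands it the larger budget share; this is one simple way a planner could counteract objective starvation before the PID controller tracks the resulting targets.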
Problem

Research questions and friction points this paper is trying to address.

multi-objective
graph self-supervised learning
objective interference
training instability
temporal allocation
Innovation

Methods, ideas, or system contributions that make the work stand out.

feedback control
multi-objective optimization
graph self-supervised learning
temporal allocation
PID controller